Company

Dify.AI x Jina AI: Dify Now Integrates the Jina Embeddings Model

Jina AI's jina-embeddings-v2, with a unique 8,192 token context, enhances RAG applications in Dify.AI's platform. It outperforms standard 512-token models, enabling richer, context-aware AI solutions and simplified development.

Dify

Dify.AI

Jina

Jina.AI

Written on

Dec 5, 2023



Text embeddings are key to building a Retrieval-Augmented Generation (RAG) application. Embedding models convert text into vectors, capturing textual semantics and enabling sophisticated natural language understanding.

In a RAG pipeline, the embedding model's context length is crucial for capturing context and meaning. Think of the Transformer as a computer: the LLM's context length is like RAM – bigger is better, yet it comes with higher costs. RAG functions similarly to external storage, enhancing this capacity efficiently. Thus, models (like the jina-embeddings-v2 family) that support longer texts are essential, enabling richer, more context-aware embeddings. 
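The practical consequence of context length is how finely a document must be chunked before it can be embedded. A rough sketch, using word count as a crude stand-in for token count:

```python
# Rough illustration: an embedding model's context window determines how
# many pieces a document must be split into before embedding.
# Word count is used here as a crude proxy for token count.

def chunk(words, max_tokens):
    """Split a list of words into pieces of at most max_tokens each."""
    return [words[i:i + max_tokens] for i in range(0, len(words), max_tokens)]

document = ["lorem"] * 4000  # a ~4,000-word document

short_context = chunk(document, 512)    # a 512-token model needs 8 pieces
long_context = chunk(document, 8192)    # an 8,192-token window fits it whole

print(len(short_context), len(long_context))  # → 8 1
```

Fewer, larger chunks mean each embedding sees more surrounding context, which is exactly what the 8,192-token window of jina-embeddings-v2 enables.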

Dify.AI integrates a variety of models from different providers to give users optimal AI solutions. With a context length of 8,192 tokens, jina-embeddings-v2 is unmatched among open-source models, most of which only support 512 tokens. By integrating Jina Embeddings models into the Dify platform, we are giving our users access to a best-in-class model that will enable more nuanced natural language understanding across a wide range of AI applications.

The large context window of jina-embeddings-v2 allows for more comprehensive processing of longer texts, leading to richer, more context-aware embeddings. This capability is crucial for achieving richer semantics and more advanced reasoning in AI applications.

According to LlamaIndex, which ranks embedding models for building RAG systems, the jina-embeddings-v2 model has emerged as a frontrunner, surpassing other notable models like Cohere's Embed v3 and OpenAI's text-embedding-ada-002.

"By supporting over 8,000 tokens of context, it captures semantics and semantic relationships in unprecedented detail. We're thrilled that integration with Dify.AI will put this leading-edge technology into the hands of so many developers and businesses," said Han Xiao, CEO of Jina AI. 

Using Jina Embeddings v2 in Dify Cloud

Incorporating Jina Embeddings into Dify Cloud is straightforward: you can build a RAG application in just a few minutes.

Setting Up Jina Embeddings API Key

Get a Jina Embeddings API key from the Embedding API page and copy it.
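If you want to verify the key outside Dify first, you can call the Jina Embeddings API directly. This is a minimal sketch; the endpoint URL, model name, and response shape below reflect Jina AI's documented OpenAI-compatible API, but check the Embedding API page for the current details:

```python
import os

JINA_API_URL = "https://api.jina.ai/v1/embeddings"

def build_request(texts, model="jina-embeddings-v2-base-en"):
    """Assemble the headers and JSON body for an embeddings call."""
    api_key = os.environ.get("JINA_API_KEY", "<your-api-key>")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {"model": model, "input": texts}
    return headers, payload

def embed(texts):
    """POST the texts and return one embedding vector per input text."""
    import requests  # third-party; pip install requests

    headers, payload = build_request(texts)
    resp = requests.post(JINA_API_URL, headers=headers, json=payload)
    resp.raise_for_status()
    return [item["embedding"] for item in resp.json()["data"]]
```

Set the `JINA_API_KEY` environment variable to the key you copied, then call `embed(["hello world"])` to confirm it works before pasting the same key into Dify.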

Incorporating Jina Embeddings into Dify Cloud

Select the Jina Embeddings model from the list, and paste the API key.

Set it as the default Embedding Model in the System Model Settings.

Developing a RAG Application

Start the development process by focusing on the creation of 'Knowledge'. This involves preparing or sourcing the content that your RAG application will use to inform its responses or actions.

Jina Embeddings v2, now part of your RAG pipeline, will automatically be used to process this content, helping to understand and index the information in your Knowledge base.

Once your Knowledge base is established, the next step is to develop a chatbot. This chatbot will use the Knowledge you've created as its context, meaning it will draw information from this Knowledge base to respond to queries or perform tasks.
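Under the hood, "drawing from the Knowledge base" means ranking stored chunk embeddings by similarity to the query embedding. A minimal sketch of that retrieval step, using toy 3-dimensional vectors in place of real model output:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, chunks, top_k=1):
    """Return the texts of the top_k chunks most similar to the query."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c["embedding"]),
                    reverse=True)
    return [c["text"] for c in ranked[:top_k]]

# Toy "embeddings" stand in for vectors a real model would produce.
knowledge = [
    {"text": "Dify integrates Jina Embeddings.", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Unrelated release notes.",         "embedding": [0.0, 0.2, 0.9]},
]
print(retrieve([1.0, 0.0, 0.0], knowledge))  # → ['Dify integrates Jina Embeddings.']
```

Dify handles this retrieval for you; the sketch only illustrates why embedding quality directly shapes which context the chatbot sees.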

After developing the chatbot, you'll need to test and observe how it functions. This is done through 'Prompt Engineering', which means interacting with the chatbot using various prompts and observing its responses. This stage is crucial for understanding how well the chatbot utilizes the Knowledge base and for making necessary adjustments to improve its performance.

Examining the Conversation Log

Inspect and analyze the details of the chatbot's interactions, which are recorded in the Conversation Log. The Prompt log page will specifically show you the intricacies of how the embedded document content (from your Knowledge base) is being utilized in each interaction. This examination can reveal insights into the effectiveness of your Knowledge base and the embedding model, and how they are influencing the chatbot's responses.

How to Use the Community Edition?

In line with our mission to democratize AI technology, Dify.AI has made sure that Jina Embeddings v2 models are accessible through both our cloud platform and the Dify.AI community edition. This dual approach caters to a diverse range of users, from enterprises requiring robust cloud solutions to individual developers and researchers seeking open-source alternatives. 

Follow these steps to create your own AI applications using the Dify Community Edition.

Prerequisites

Deploying jina-embeddings-v2

To get started with the jina-embeddings-v2 model, simply head over to the Xinference page and find it under the Embedding Models section. Once there, activate the model to begin using it. You can always check the model's status and see it in action on the Running Models page. 

Adding Jina Embeddings Models to Dify

First, add the recently activated embedding model to the Xorbits Inference model provider.

Then, set it as the default embedding model in the System Model Settings.

Construct a chatbot using the Knowledge you just developed as its context.

After these steps, you are ready to develop a RAG application using the same procedure as the cloud version.

Conclusion

In addition to the Jina embedding model referenced in this article, we also support embedding models from OpenAI and ZHIPU (ChatGLM). We aim to collaborate more with exceptional partners, offering developers a more open platform with a variety of curated tools. Our efforts are focused on enhancing the architecture of the community edition, making it easy for contributors to integrate high-quality models and build on Dify. For further information, we invite you to reach out to us via our community.

For more information about Dify.AI’s offerings, check out the Dify.AI website or join our community on Discord.
