Release

Dify.AI v0.4.1 Release: The Model Runtime Restructure

We're rolling out a brand new Model Runtime architecture, replacing the pre-existing model interface built on LangChain.

Gu, DevRel

Written on Jan 3, 2024

Today, we're sharing some exciting changes happening in our codebase. We're rolling out a brand new Model Runtime architecture, replacing the pre-existing model interface built on LangChain. This change is all about streamlining our development process and making our codebase more robust and contributor-friendly. Let's break down what this means and why it's a big deal.

Model Runtime empowers developers to plug in any model, customized or predefined, all through a unified approach.

Out with the Old

We're saying goodbye to LangChain. It's been great scaffolding that helped us ship product features quickly, but we think it's time to part ways. Why? Well, LangChain has been a bit of a wild card, often introducing breaking changes that throw a wrench in our workflow. It's also been brittle and not quite in line with our product logic. As we open up parts of our pipeline for deeper customization, LangChain's components have proven restrictive, pushing us toward a more tailored, adaptive framework. In short, it's been holding us back.

In with the New

Model Runtime solves these issues. This isn't just a facelift; it's a full-on restructuring with some clear benefits:

  • Fewer Dependencies: We're reducing our reliance on external packages that don't always play nice with our code.

  • Better Alignment: Model Runtime is designed to be in tune with our development needs, meaning less time fixing things and more time building cool features.

  • Open Source Friendly: With clearer structure and contribution guidelines, we're paving the way for more contributions from the open-source community -- more people teaming up to advance open source AI.

What's Changing?

We're focusing on a few key areas:

  • Frontend Independence: Models can now be fully defined and configured in the backend with no need for frontend changes. This decouples the model logic from the UI, leading to more modular code and faster development cycles for contributors.

  • Unified Interface: A single, streamlined interface for all model types. Whether it's text embedding or inference, everything is accessible in a consistent way -- see the caller-side sketch after these examples.

class AnthropicLargeLanguageModel(LargeLanguageModel):

    # _invoke() is a universal entry point for interacting with all models;
    # for an LLM, this takes the form of text generation.
    def _invoke(self, model, credentials, prompt_messages,
                model_parameters, stream=True, user=None):
        # handles input/output conversion, streaming, token counting...
        # depending on the specific model
        ...

    # ... other methods
  • Simpler Configuration with YAML: We've defined a custom DSL to declaratively configure providers and models, which brings clear structure to the codebase and a standardized way to add new models. A provider declaration then looks like this -- far more readable:

provider: anthropic
description:
  en_US: Anthropic's powerful models, such as Claude 2 and Claude Instant.
supported_model_types:
  - llm
# ...
provider_credential_schema:
  credential_form_schemas:
    - variable: anthropic_api_key
      label:
        en_US: API Key
      type: secret-input
# ... other variables
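To see what the unified interface buys you on the calling side, here's a minimal sketch. The import paths and message types mirror our model_runtime module but are simplified for illustration, so treat the exact names as assumptions rather than a stable API:

# Hypothetical usage sketch -- exact import paths and types may differ.
from core.model_runtime.entities.message_entities import UserPromptMessage
from core.model_runtime.model_providers.anthropic.llm.llm import (
    AnthropicLargeLanguageModel,
)

llm = AnthropicLargeLanguageModel()

# Every provider answers the same invoke() shape, so swapping Anthropic for
# another provider changes the class and credentials -- not this call site.
result = llm.invoke(
    model="claude-2",
    credentials={"anthropic_api_key": "sk-ant-..."},  # matches the YAML schema above
    prompt_messages=[UserPromptMessage(content="Hello, Model Runtime!")],
    model_parameters={"temperature": 0.7},
    stream=False,
)
print(result.message.content)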

Plus many other upgrades, including:

  1. Support for all models compatible with OpenAI's API.

  2. Many more built-in models, including Google Gemini.

  3. More informative error returns during model calls.

  4. A list of all supported models with configuration guidance for each built-in model provider, making it straightforward to grasp what Dify supports.

  5. Complete freedom to expose and access model parameters (sketched below).

  6. Better logging with LangSmith integration.
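Since each model (not just each provider) gets its own backend YAML declaration, exposing a parameter is just another line of configuration. Here's a simplified sketch of what a model declaration can look like -- field names follow the same DSL as the provider example above, but check the contribution guide for the exact schema:

model: claude-2
label:
  en_US: claude-2
model_type: llm
model_properties:
  mode: chat
  context_size: 100000
parameter_rules:
  # each rule exposes one tunable parameter to app builders
  - name: temperature
    use_template: temperature
  - name: max_tokens_to_sample
    use_template: max_tokens
    default: 4096
# ... other rules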

If you're looking to connect models/providers to Dify, check out our contribution guide. Otherwise, pick from 15+ readily supported model providers including OpenAI, Anthropic, and Azure, and start experimenting in no time.

Wrapping up

This restructure is a big step forward for us. It's about building architecture that's robust, highly customizable, and future-ready. We're excited to see where this takes us, and even more excited to see how the community jumps in to contribute.

Stay tuned for more updates, and as always, happy building 🛠️
