Release

From Basic to Expert: Mastering the New Prompt Orchestration in Dify.AI

Dify.AI introduces a new "Expert Mode" on its prompt orchestration page, giving professional developers and prompt engineers advanced customization and control over prompt orchestration. While "Basic Mode" simplifies application creation, "Expert Mode" provides a wide range of tools and options for designing complex prompt orchestrations, enabling users to fine-tune AI applications for optimal performance. It includes features such as fully customizable prompts, different model types (CHAT and COMPLETE) for different requirements, and a 'Log View' feature for debugging.

Dify.AI

Written on Oct 23, 2023

Since its inception, Dify.AI has worked to give developers more flexibility and a higher degree of control over prompt orchestration. To that end, we've unveiled a new orchestration mode: simply switch from 'Basic Mode' to 'Expert Mode' on the prompt orchestration page to begin a fresh journey of prompt orchestration.

Exploring Expert Mode in Prompt Orchestration

Mode Overview

Expert Mode is designed for professional developers and prompt engineers, providing a highly flexible customized orchestration mode. This mode assists in crafting effective prompts for robust and reliable interaction with LLMs or datasets.

From Basic to Expert

Basic Mode lets users configure and create basic applications with ease, while Expert Mode grants you more control and customization options. In Expert Mode, you are free to edit every prompt element, including Context, User Input, Conversation History, and examples, guiding the model toward the desired output. Whether you are building a conversational application or a text generation application, Expert Mode offers a rich set of options and tools for designing more complex prompt orchestration.

Basic Mode was designed to lower the barrier to creating AI applications by encapsulating parts of the prompt, which to some extent limits your autonomy in orchestration. If you have orchestrated prompts in 'Basic Mode', uploaded datasets, and then switch to 'Expert Mode', you can see the complete prompts that Basic Mode had encapsulated and freely modify any part of them. This greater autonomy lets you steer the LLM toward its ideal output and quickly tune the AI application to its best performance.

Functional Configuration in Expert Mode

In Expert Mode, you have full freedom to customize prompts. If you choose the CHAT model, you can write prompts for three message types: SYSTEM / USER / ASSISTANT. If you opt for the COMPLETE model, you can flexibly adjust the prompt blocks, such as Context, Conversation History, Query, and Variables, to better fit your requirements; in the CHAT model, only Context and Variables can be adjusted.
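As a rough illustration (a hypothetical sketch, not Dify's actual configuration format), the two prompt shapes can be pictured like this: a CHAT prompt is a list of role-tagged messages, while a COMPLETE prompt is a single text template assembled from blocks such as Context, Conversation History, and Query.

```python
# Hypothetical sketch of the two prompt shapes; placeholder names like
# {{context}} and {{query}} are illustrative, not Dify's exact syntax.

# CHAT model: a list of role-tagged messages (SYSTEM / USER / ASSISTANT).
chat_prompt = [
    {"role": "system", "content": "You are a support assistant. Use this context:\n{{context}}"},
    {"role": "user", "content": "{{query}}"},
]

# COMPLETE model: one text template whose blocks (Context, Conversation
# History, Query, Variables) can be freely rearranged or rewritten.
complete_prompt = """You are a support assistant.

Context:
{{context}}

Conversation so far:
{{conversation_history}}

Question: {{query}}
Answer:"""
```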

With the CHAT model selected, you write prompts for the three message types above (SYSTEM / USER / ASSISTANT). By orchestrating USER and ASSISTANT interactions in a modular way, you can guide the model toward the expected output.

Take an application that generates multiple QA pairs from a piece of text. By supplying several USER and ASSISTANT example exchanges, you give the model clear guidance and ensure that it strictly adheres to the SYSTEM prompt's constraints when answering. It's akin to setting a template, so the model outputs results in a fixed format.
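For instance, a few-shot arrangement for such an application might look like the sketch below (a hypothetical layout assuming alternating USER/ASSISTANT example messages, not a verbatim Dify configuration):

```python
# Hypothetical few-shot layout for a "generate QA pairs from text" application.
# The SYSTEM message fixes the output format; the USER/ASSISTANT pairs act as
# a template that the model is expected to mirror for the real input.
messages = [
    {
        "role": "system",
        "content": "Extract question-answer pairs from the given text. "
                   "Output each pair on two lines in the form 'Q: ...' then 'A: ...'.",
    },
    # Example 1: demonstrates the expected input -> output mapping.
    {"role": "user", "content": "Text: Dify.AI is an LLM application development platform."},
    {"role": "assistant", "content": "Q: What is Dify.AI?\nA: An LLM application development platform."},
    # Example 2: reinforces the fixed format with a second demonstration.
    {"role": "user", "content": "Text: Expert Mode lets developers edit every part of the prompt."},
    {"role": "assistant", "content": "Q: What does Expert Mode allow?\nA: Editing every part of the prompt."},
    # The real input goes last; {{input_text}} is an illustrative variable name.
    {"role": "user", "content": "Text: {{input_text}}"},
]
```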

In the same way, you can write targeted prompts for many other scenarios, precisely guiding your AI application's responses and further improving the efficiency and accuracy of prompt orchestration.

You can easily switch between the CHAT and COMPLETE models to find the best fit for your application. During detailed prompt debugging, the 'Log View' feature lets you trace the entire process from input to output and locate issues. Whether the problem is a model parsing error or poor prompt quality, it can be discovered and corrected promptly, optimizing application performance and ensuring output quality.

For more details and guidelines on Expert Mode in prompt orchestration, please refer to the official documentation.

Coming Soon

  • New Content Review feature.

Supports filtering of sensitive words in user input and model output, making your AI application content generation more secure and controlled.

  • External Tool API Calls.

Prompt orchestration now supports external tool API calls, allowing the insertion of API query results into prompts, enhancing the extensibility of application orchestration.
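As a generic illustration of the idea (not Dify's API; the endpoint and names below are hypothetical), an external API result might be fetched and spliced into a prompt like this:

```python
# Generic sketch: fetch data from an external tool API and insert the result
# into a prompt before it is sent to the model. The endpoint is hypothetical.
import requests

def build_prompt(city: str) -> str:
    resp = requests.get(
        "https://api.example.com/weather",  # stand-in for any external tool API
        params={"city": city},
        timeout=10,
    )
    resp.raise_for_status()
    weather = resp.json()

    # The API query result becomes extra context in the prompt.
    return (
        f"Current weather data for {city}: {weather}\n\n"
        "Using the data above, write a one-sentence weather summary."
    )
```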



If you like Dify, give it a Star ⭐️.
