Question: Find the Dify manual and explain how to implement MCP as a server within Dify.
I'll look up information on how to implement an MCP (Model Context Protocol) server in Dify.
Context7
resolve-library-id
Request
{ "libraryName": "dify" }
Response
Available Libraries (top matches). Each result includes:
- Library ID: Context7-compatible identifier (format: /org/project)
- Name: Library or package name
- Description: Short summary
- Code Snippets: Number of available code examples
- Trust Score: Authority indicator
- Versions: List of versions if available. Use one of those versions if and only if the user explicitly provides a version in their query.

For best results, select libraries based on name match, trust score, snippet coverage, and relevance to your use case.

- Title: Dify
  Context7-compatible library ID: /langgenius/dify
  Description: Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
  Code Snippets: 604 | Trust Score: 8.9
- Title: Dify
  Context7-compatible library ID: /context7/dify_ai
  Description: Dify is an open-source platform that combines Backend-as-a-Service and LLMOps to streamline the development of generative AI applications, offering intuitive prompt orchestration, RAG engines, and an AI Agent framework.
  Code Snippets: 1633 | Trust Score: 7.5
- Title: Dify
  Context7-compatible library ID: /langgenius/dify-docs
  Description: The open-source repo for docs.dify.ai
  Code Snippets: 1438 | Trust Score: 8.9
- Title: Dify
  Context7-compatible library ID: /context7/docs_dify_ai-zh-hans
  Description: Dify is an open-source LLM application development platform that combines Backend as Service and LLMOps. It enables developers and non-technical users to quickly build and operate production-grade generative AI applications.
  Code Snippets: 71 | Trust Score: 7.5
- Title: Dify
  Context7-compatible library ID: /langgenius/dify-docs-mintlify
  Code Snippets: 1208 | Trust Score: 8.9
- Title: Dify for DSL
  Context7-compatible library ID: /wwwzhouhui/dify-for-dsl
  Description: A collection of DSL workflow scripts built on the open-source Dify project
  Code Snippets: 110 | Trust Score: 9.1
- Title: Dify AI Provider
  Context7-compatible library ID: /warmwind/dify-ai-provider
  Description: Dify provider for Vercel AI SDK
  Code Snippets: 9 | Trust Score: 7.8
- Title: Dify Plugin SDKs
  Context7-compatible library ID: /langgenius/dify-plugin-sdks
  Description: SDK for Dify plugins
  Code Snippets: 25 | Trust Score: 8.9
- Title: Dify Plugin Daemon
  Context7-compatible library ID: /langgenius/dify-plugin-daemon
  Code Snippets: 6 | Trust Score: 8.9
- Title: Dify Plugin TOD Agent
  Context7-compatible library ID: /svcvit/dify-plugin-tod_agent
  Description: A powerful task-oriented dialogue agent that can collect information through structured conversations. It supports dynamic field validation, multi-field information extraction, and state management.
  Code Snippets: 3 | Trust Score: 9.6
- Title: Nodify
  Context7-compatible library ID: /miroiu/nodify
  Description: Highly performant and modular controls for node-based editors designed for data-binding and MVVM.
  Code Snippets: 987 | Trust Score: 9
- Title: Botkit
  Context7-compatible library ID: /fedify-dev/botkit
  Description: A framework for creating ActivityPub bots
  Code Snippets: 192 | Trust Score: 6.7
- Title: Fedify
  Context7-compatible library ID: /fedify-dev/fedify
  Description: ActivityPub server framework in TypeScript
  Code Snippets: 765 | Trust Score: 6.7
- Title: Fedify (llmstxt)
  Context7-compatible library ID: /llmstxt/fedify_dev-llms.txt
  Description: Fedify is a TypeScript library for building federated server applications powered by ActivityPub and other standards.
  Code Snippets: 413 | Trust Score: 8
I'll use the most relevant official Dify documentation. Since /langgenius/dify-docs appears to be the official docs, I'll query it for MCP-related information.
Context7
get-library-docs
Request
{ "topic": "mcp server implementation", "tokens": 15000, "context7CompatibleLibraryID": "/langgenius/dify-docs" }
Response
======================== CODE SNIPPETS ========================

TITLE: Initialize and Configure Dify MCP Server
DESCRIPTION: After cloning the repository, navigate into the `dify-docs` directory to initialize the MCP server. The `npx mint-mcp add` command creates a local MCP server instance, analyzes and indexes the documentation content, and configures necessary dependencies. This process may take several minutes depending on network conditions.
SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/learn-more/extended-reading/dify-docs-mcp.mdx#_snippet_1
LANGUAGE: bash
CODE:
```
cd dify-docs
npx mint-mcp add dify-6c0370d8
```

----------------------------------------

TITLE: Configure Zapier MCP Server URL for Dify Workflow
DESCRIPTION: This JSON structure serves as a template for configuring the Zapier MCP server within a Dify Workflow's Agent node. Users must replace the "url" field with their specific MCP Server address. This configuration enables the Dify Agent to execute tasks, such as sending emails, by interacting with the configured Zapier MCP Server.
SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/plugins/best-practice/how-to-use-mcp-zapier.mdx#_snippet_1
LANGUAGE: json
CODE:
```
{
  "server_name": {
    "url": "https://actions.zapier.com/mcp/*******/sse",
    "headers": {},
    "timeout": 5,
    "sse_read_timeout": 300
  }
}
```

----------------------------------------

TITLE: Clone Dify Documentation Repository
DESCRIPTION: This step involves cloning the Dify documentation repository from GitHub to your local machine. This repository will serve as the primary data source for the Model Context Protocol (MCP) server, enabling the AI to access and process Dify's documentation content.
SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/learn-more/extended-reading/dify-docs-mcp.mdx#_snippet_0
LANGUAGE: bash
CODE:
```
git clone https://github.com/langgenius/dify-docs.git
```

----------------------------------------

TITLE: Configure MCP SSE Plugin with Zapier Server URL in Dify
DESCRIPTION: This JSON snippet illustrates the required format for configuring the MCP SSE plugin in Dify. It specifies the Zapier MCP Server URL, along with optional headers and timeout settings for the connection, enabling Dify to communicate with Zapier's MCP service.
SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/plugins/best-practice/how-to-use-mcp-zapier.mdx#_snippet_0
LANGUAGE: JSON
CODE:
```
{
  "server_name": {
    "url": "https://actions.zapier.com/mcp/*******/sse",
    "headers": {},
    "timeout": 5,
    "sse_read_timeout": 300
  }
}
```

----------------------------------------

TITLE: Implement LLM Invocation Core Method
DESCRIPTION: This method implements the core logic for invoking a Large Language Model, supporting both streaming and synchronous responses. It handles various parameters including model name, credentials, prompt messages, and optional tools or stop sequences.
SOURCE: https://github.com/langgenius/dify-docs/blob/main/ja-jp/guides/model-configuration/interfaces.mdx#_snippet_2 LANGUAGE: python CODE: ``` def _invoke(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], model_parameters: dict, tools: Optional[list[PromptMessageTool]] = None, stop: Optional[List[str]] = None, stream: bool = True, user: Optional[str] = None) \ -> Union[LLMResult, Generator]: """ Invoke large language model :param model: model name :param credentials: model credentials :param prompt_messages: prompt messages :param model_parameters: model parameters :param tools: tools for tool calling :param stop: stop words :param stream: is stream response :param user: unique user id :return: full response or stream response chunk generator result """ ``` LANGUAGE: APIDOC CODE: ``` Parameters: model (string): Model name credentials (object): Credential information. Parameters defined by `provider_credential_schema` or `model_credential_schema` in provider's YAML config (e.g., `api_key`). prompt_messages (array[[PromptMessage](#PromptMessage)]): List of prompts. If the model is of the `Completion` type, the list only needs to include one [UserPromptMessage](#UserPromptMessage) element; If the model is of the `Chat` type, it requires a list of elements such as [SystemPromptMessage](#SystemPromptMessage), [UserPromptMessage](#UserPromptMessage), [AssistantPromptMessage](#AssistantPromptMessage), [ToolPromptMessage](#ToolPromptMessage) depending on the message. model_parameters (object): Model parameters defined by the `parameter_rules` in the model's YAML configuration. tools (array[[PromptMessageTool](#PromptMessageTool)]) [optional]: List of tools, equivalent to the `function` in `function calling`. stop (array[string]) [optional]: Stop sequences. The model output will stop before the string defined by the stop sequence. stream (bool): Whether to output in a streaming manner, default is True. Streaming output returns Generator[[LLMResultChunk](#LLMResultChunk)], non-streaming output returns [LLMResult](#LLMResult)]. user (string) [optional]: Unique identifier of the user. This can help the provider monitor and detect abusive behavior. Returns: Streaming output returns Generator[[LLMResultChunk](#LLMResultChunk)], non-streaming output returns [LLMResult](#LLMResult)]. ``` ---------------------------------------- TITLE: Python: Implement Simplified Model Provider Credential Validation DESCRIPTION: For custom model providers that do not require specific credential validation logic, the `validate_provider_credentials` method can be implemented as a simple pass-through. This example demonstrates a minimal implementation for a `XinferenceProvider`. SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-en/0412-model-schema.mdx#_snippet_1 LANGUAGE: python CODE: ``` class XinferenceProvider(Provider): def validate_provider_credentials(self, credentials: dict) -> None: pass ``` ---------------------------------------- TITLE: LLM Invocation Implementation Example (Stream/Sync) DESCRIPTION: Illustrates how to implement the `_invoke` method to handle both streaming and synchronous responses by delegating to separate helper functions. This example uses simplified parameters to demonstrate the `yield` keyword for streaming. 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-zh/9243-customizable-model.mdx#_snippet_1 LANGUAGE: Python CODE: ``` def _invoke(self, stream: bool, **kwargs) \ -> Union[LLMResult, Generator]: if stream: return self._handle_stream_response(**kwargs) return self._handle_sync_response(**kwargs) def _handle_stream_response(self, **kwargs) -> Generator: for chunk in response: yield chunk def _handle_sync_response(self, **kwargs) -> LLMResult: return LLMResult(**response) ``` ---------------------------------------- TITLE: Python Dify Plugin Endpoint Implementation Example DESCRIPTION: This Python code demonstrates how to implement a Dify plugin endpoint. It shows a class 'Duck' inheriting from 'dify_plugin.Endpoint' and implementing the '_invoke' method. The method accesses the Werkzeug 'Request' object, path 'values', and 'settings', returning a 'Response' object, supporting streaming. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/plugins/schema-definition/endpoint.mdx#_snippet_2 LANGUAGE: python CODE: ``` import json from typing import Mapping from werkzeug import Request, Response from dify_plugin import Endpoint class Duck(Endpoint): def _invoke(self, r: Request, values: Mapping, settings: Mapping) -> Response: """ Invokes the endpoint with the given request. """ app_id = values["app_id"] def generator(): yield f"{app_id} <br>" return Response(generator(), status=200, content_type="text/html") ``` ---------------------------------------- TITLE: Running Mintlify Development Server (Shell) DESCRIPTION: This command starts a local development server for your documentation. It should be executed at the root of your documentation repository, where the `docs.json` configuration file is located, to preview changes in real-time. SOURCE: https://github.com/langgenius/dify-docs/blob/main/quickstart.mdx#_snippet_1 LANGUAGE: Shell CODE: ``` mintlify dev ``` ---------------------------------------- TITLE: Handle Stream and Synchronous LLM Responses DESCRIPTION: Demonstrates how to separate stream and synchronous response handling within the `_invoke` method. Python's `yield` keyword makes functions generators, requiring distinct implementations for streaming and non-streaming returns. SOURCE: https://github.com/langgenius/dify-docs/blob/main/zh-hans/guides/model-configuration/predefined-model.mdx#_snippet_1 LANGUAGE: python CODE: ``` def _invoke(self, stream: bool, **kwargs) \ -> Union[LLMResult, Generator]: if stream: return self._handle_stream_response(**kwargs) return self._handle_sync_response(**kwargs) def _handle_stream_response(self, **kwargs) -> Generator: for chunk in response: yield chunk def _handle_sync_response(self, **kwargs) -> LLMResult: return LLMResult(**response) ``` ---------------------------------------- TITLE: Simple Implementation for Custom Model Provider Credentials Validation DESCRIPTION: While predefined models must fully implement the credential validation interface, custom model providers can implement it simply by passing, as shown in this example for a XinferenceProvider. 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/plugins/schema-definition/model/model-schema.mdx#_snippet_1 LANGUAGE: python CODE: ``` class XinferenceProvider(Provider): def validate_provider_credentials(self, credentials: dict) -> None: pass ``` ---------------------------------------- TITLE: Dify LLM Plugin `_invoke` Method Implementation Pattern DESCRIPTION: This snippet illustrates the recommended implementation pattern for the `_invoke` method in a Dify LLM plugin. It shows how to conditionally call internal helper methods (`_invoke_stream` or `_invoke_sync`) based on the `stream` parameter and includes basic error handling for API calls. SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-en/0222-creating-new-model-provider-extra.mdx#_snippet_4 LANGUAGE: python CODE: ``` def _invoke(self, ..., stream: bool = True, ...) -> Union[LLMResult, Generator[LLMResultChunk, None, None]]: # Prepare API request parameters (authentication, model parameter conversion, message format conversion, etc.) api_params = self._prepare_api_params(credentials, model_parameters, prompt_messages, tools, stop) try: if stream: return self._invoke_stream(model, api_params, user) else: return self._invoke_sync(model, api_params, user) except vendor_sdk.APIError as e: # Handle API errors, map to Dify errors (reference _invoke_error_mapping) # ... raise mapped_error ... pass # Replace with actual error handling except Exception as e: logger.exception("Unknown error during model invocation") raise e # Or raise a generic InvokeError def _invoke_stream(self, model: str, api_params: dict, user: Optional[str]) -> Generator[LLMResultChunk, None, None]: # Call the vendor_sdk's streaming interface # for api_chunk in vendor_sdk.create_stream(...): # # Convert api_chunk to LLMResultChunk # dify_chunk = self._convert_api_chunk_to_llm_result_chunk(api_chunk) # yield dify_chunk pass # Replace with actual implementation def _invoke_sync(self, model: str, api_params: dict, user: Optional[str]) -> LLMResult: # Call the vendor_sdk's synchronous interface # api_response = vendor_sdk.create_sync(...) # Convert api_response to LLMResult (including message.content, tools, usage) # dify_result = self._convert_api_response_to_llm_result(api_response) # return dify_result pass # Replace with actual implementation ``` ---------------------------------------- TITLE: Python: Implement Provider Credential Validation DESCRIPTION: Providers must inherit from `__base.model_provider.ModelProvider` and implement `validate_provider_credentials`. This method validates the provider's credentials, which are defined by `provider_credential_schema`. If validation fails, a `CredentialsValidateFailedError` should be raised. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/guides/model-configuration/interfaces.mdx#_snippet_0 LANGUAGE: python CODE: ``` def validate_provider_credentials(self, credentials: dict) -> None: """ Validate provider credentials You can choose any validate_credentials method of model type or implement validate method by yourself, such as: get model list api if validate failed, raise exception :param credentials: provider credentials, credentials form defined in `provider_credential_schema`. """ ``` ---------------------------------------- TITLE: Running Mintlify Development Server DESCRIPTION: Execute this command from your documentation directory (where 'docs.json' is located) to start the local development server. 
It makes a live preview of your documentation available, typically at 'http://localhost:3000'. SOURCE: https://github.com/langgenius/dify-docs/blob/main/development.mdx#_snippet_2 LANGUAGE: bash CODE: ``` mintlify dev ``` ---------------------------------------- TITLE: Implement Anthropic Model Provider Class in Python DESCRIPTION: This Python code defines the `AnthropicProvider` class, which inherits from `ModelProvider`. It implements the `validate_provider_credentials` method to verify Anthropic API keys by attempting to get a model instance and validate credentials, raising an exception if validation fails. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/plugins/quick-start/develop-plugins/model-plugin/create-model-providers.mdx#_snippet_3 LANGUAGE: python CODE: ``` import logging from dify_plugin.entities.model import ModelType from dify_plugin.errors.model import CredentialsValidateFailedError from dify_plugin import ModelProvider logger = logging.getLogger(__name__) class AnthropicProvider(ModelProvider): def validate_provider_credentials(self, credentials: dict) -> None: """ Validate provider credentials if validate failed, raise exception :param credentials: provider credentials, credentials form defined in `provider_credential_schema`. """ try: model_instance = self.get_model_instance(ModelType.LLM) model_instance.validate_credentials(model="claude-3-opus-20240229", credentials=credentials) except CredentialsValidateFailedError as ex: raise ex except Exception as ex: logger.exception(f"{self.get_provider_schema().provider} credentials validate failed") raise ex ``` ---------------------------------------- TITLE: Example Directory Structure for Dify Model Provider Plugin DESCRIPTION: This example illustrates the recommended directory structure for a Dify model provider plugin, demonstrating the organization for different model types (LLM, Text Embedding) and their respective implementation and configuration files within the `models/my_provider/` path. It highlights the separation of model type directories, implementation files (e.g., `llm.py`), and configuration files (e.g., `my-llm-model-v1.yaml`). This structure ensures maintainability and scalability for new model integrations. SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-en/0222-creating-new-model-provider-extra.mdx#_snippet_0 LANGUAGE: bash CODE: ``` models/my_provider/ ├── models # Model implementation and configuration directory │ ├── llm # LLM type │ │ ├── _position.yaml (Optional, controls sorting) │ │ ├── my-llm-model-v1.yaml │ │ ├── my-llm-model-v2.yaml │ │ └── llm.py # LLM implementation logic │ └── text_embedding # Embedding type │ ├── _position.yaml (Optional, controls sorting) │ ├── my-embedding-model.yaml │ └── text_embedding.py # Embedding implementation logic ├── provider # Provider-level code directory │ └── my_provider.py (For credential validation, etc., refer to "Creating Model Provider" document) └── manifest.yaml # Plugin manifest file ``` ---------------------------------------- TITLE: Implement LLM Invocation Method (_invoke) DESCRIPTION: This method is the core of LLM interaction, supporting both streaming and synchronous responses. It takes model, credentials, prompt messages, and parameters as input, along with optional tools, stop words, and user ID. It returns either a `LLMResult` or a `Generator`. 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/zh-hans/guides/model-configuration/predefined-model.mdx#_snippet_0 LANGUAGE: python CODE: ``` def _invoke(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], model_parameters: dict, tools: Optional[list[PromptMessageTool]] = None, stop: Optional[List[str]] = None, stream: bool = True, user: Optional[str] = None) \ -> Union[LLMResult, Generator]: """ Invoke large language model :param model: model name :param credentials: model credentials :param prompt_messages: prompt messages :param model_parameters: model parameters :param tools: tools for tool calling :param stop: stop words :param stream: is stream response :param user: unique user id :return: full response or stream response chunk generator result """ ``` ---------------------------------------- TITLE: Python: Example Implementation for Sync and Stream LLM Responses DESCRIPTION: This example illustrates how to implement the `_invoke` method to conditionally handle synchronous and streaming responses. It demonstrates the use of separate helper functions, `_handle_stream_response` and `_handle_sync_response`, to manage the distinct return types required for generator-based streaming and direct result returns. SOURCE: https://github.com/langgenius/dify-docs/blob/main/zh-hans/plugins/quick-start/develop-plugins/model-plugin/customizable-model.mdx#_snippet_1 LANGUAGE: python CODE: ``` def _invoke(self, stream: bool, **kwargs) \ -> Union[LLMResult, Generator]: if stream: return self._handle_stream_response(**kwargs) return self._handle_sync_response(**kwargs) def _handle_stream_response(self, **kwargs) -> Generator: for chunk in response: yield chunk def _handle_sync_response(self, **kwargs) -> LLMResult: return LLMResult(**response) ``` ---------------------------------------- TITLE: Implementing XinferenceProvider Class in Python DESCRIPTION: This Python snippet demonstrates the minimal implementation for a custom model provider like Xinference. It defines the `XinferenceProvider` class inheriting from `Provider` and includes an empty `validate_provider_credentials` method, which is necessary to satisfy the abstract parent class's requirements and prevent instantiation errors. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/plugins/quick-start/develop-plugins/model-plugin/create-model-providers.mdx#_snippet_6 LANGUAGE: python CODE: ``` class XinferenceProvider(Provider): def validate_provider_credentials(self, credentials: dict) -> None: pass ``` ---------------------------------------- TITLE: Implement Dify Plugin Endpoint _invoke Method (Python) DESCRIPTION: This Python code demonstrates the implementation of the `_invoke` method within a subclass of `dify_plugin.Endpoint`. It shows how to access path parameters (`values`), handle the `Request` object, and return a `Response` object, supporting streaming responses. Direct string returns are not supported. SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-en/0432-endpoint.mdx#_snippet_2 LANGUAGE: python CODE: ``` import json from typing import Mapping from werkzeug import Request, Response from dify_plugin import Endpoint class Duck(Endpoint): def _invoke(self, r: Request, values: Mapping, settings: Mapping) -> Response: """ Invokes the endpoint with the given request. 
""" app_id = values["app_id"] def generator(): yield f"{app_id} <br>" return Response(generator(), status=200, content_type="text/html") ``` ---------------------------------------- TITLE: Implement Text Embedding Model Invocation in Python DESCRIPTION: Details the implementation of the `_invoke` method for text embedding models, inheriting from `TextEmbeddingModel`. It explains how to pass model name, credentials, and a list of texts for batch processing to get embedding results. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/guides/model-configuration/interfaces.mdx#_snippet_6 LANGUAGE: python CODE: ``` def _invoke(self, model: str, credentials: dict, texts: list[str], user: Optional[str] = None) \ -> TextEmbeddingResult: """ Invoke large language model :param model: model name :param credentials: model credentials :param texts: texts to embed :param user: unique user id :return: embeddings result """ ``` LANGUAGE: APIDOC CODE: ``` Method: _invoke Parameters: model (string): Model name credentials (object): Credential information. The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. texts (array[string]): List of texts, capable of batch processing user (string) [optional]: Unique identifier of the user. This can help the provider monitor and detect abusive behavior. Returns: TextEmbeddingResult entity. ``` ---------------------------------------- TITLE: Xinference Credential Schema: Server URL and Model UID (YAML) DESCRIPTION: This YAML snippet defines the `server_url` and `model_uid` input fields within the `provider_credential_schema` for Xinference. These required text inputs enable users to specify the local deployment address of their Xinference server and the unique identifier of the model, facilitating connection to the self-hosted instance. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/plugins/quick-start/develop-plugins/model-plugin/customizable-model.mdx#_snippet_3 LANGUAGE: yaml CODE: ``` - variable: server_url label: zh_Hans: 服务器 URL en_US: Server url type: text-input required: true placeholder: zh_Hans: 在此输入 Xinference 的服务器地址,如 https://example.com/xxx en_US: Enter the url of your Xinference, for example https://example.com/xxx - variable: model_uid label: zh_Hans: 模型 UID en_US: Model uid type: text-input required: true placeholder: zh_Hans: 在此输入您的 Model UID en_US: Enter the model uid ``` ---------------------------------------- TITLE: Python: Implement Model Invocation Error Mapping DESCRIPTION: Models must implement the `_invoke_error_mapping` property to map internal model invocation exceptions to unified `InvokeError` types defined by Runtime. This ensures Dify can handle errors consistently, such as connection, server, rate limit, authorization, or bad request errors. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/guides/model-configuration/interfaces.mdx#_snippet_2 LANGUAGE: python CODE: ``` @property def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]: """ Map model invoke error to unified error The key is the error type thrown to the caller The value is the error type thrown by the model, which needs to be converted into a unified error type for the caller. 
:return: Invoke error mapping """ ``` ---------------------------------------- TITLE: Implement LLM Invocation with Streaming and Synchronous Returns DESCRIPTION: This method defines the core logic for invoking a Large Language Model, supporting both streaming and synchronous response types. It handles various parameters including model name, credentials, prompt messages, model parameters, tools for function calling, stop sequences, streaming preference, and user identification. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/guides/model-configuration/interfaces.mdx#_snippet_3 LANGUAGE: python CODE: ``` def _invoke(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], model_parameters: dict, tools: Optional[list[PromptMessageTool]] = None, stop: Optional[List[str]] = None, stream: bool = True, user: Optional[str] = None) \ -> Union[LLMResult, Generator]: """ Invoke large language model :param model: model name :param credentials: model credentials :param prompt_messages: prompt messages :param model_parameters: model parameters :param tools: tools for tool calling :param stop: stop words :param stream: is stream response :param user: unique user id :return: full response or stream response chunk generator result """ ``` LANGUAGE: APIDOC CODE: ``` _invoke(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], model_parameters: dict, tools: Optional[list[PromptMessageTool]] = None, stop: Optional[List[str]] = None, stream: bool = True, user: Optional[str] = None) - Parameters: - model (string): Model name - credentials (object): Credential information. Parameters defined by provider_credential_schema or model_credential_schema in provider's YAML. Inputs such as api_key are included. - prompt_messages (array[[PromptMessage]]): List of prompts. If Completion type, list needs one UserPromptMessage. If Chat type, requires list of SystemPromptMessage, UserPromptMessage, AssistantPromptMessage, ToolPromptMessage depending on message. - model_parameters (object): Model parameters. Defined by parameter_rules in the model's YAML configuration. - tools (array[[PromptMessageTool]]): [optional] List of tools, equivalent to the function in function calling. That is, the tool list for tool calling. - stop (array[string]): [optional] Stop sequences. The model output will stop before the string defined by the stop sequence. - stream (bool): Whether to output in a streaming manner, default is True. Streaming output returns Generator[LLMResultChunk], non-streaming output returns LLMResult. - user (string): [optional] Unique identifier of the user. This can help the provider monitor and detect abusive behavior. - Returns: Streaming output returns Generator[LLMResultChunk], non-streaming output returns LLMResult. ``` ---------------------------------------- TITLE: Configure Dify Plugin Remote Debugging Environment DESCRIPTION: This snippet shows how to configure the `.env` file in a local Dify plugin project for remote debugging. It specifies the installation method as 'remote' and provides placeholders for the Dify server address, debug port, and the unique debug key obtained from the Dify instance. 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-en/0222-creating-new-model-provider-extra.mdx#_snippet_7 LANGUAGE: dotenv CODE: ``` INSTALL_METHOD=remote REMOTE_INSTALL_HOST=<your-dify-domain-or-ip> # Dify server address REMOTE_INSTALL_PORT=5003 # Debug port REMOTE_INSTALL_KEY=****-****-****-****-**** # Your Debug Key ``` ---------------------------------------- TITLE: Install and Verify Node.js DESCRIPTION: Installs Node.js using Homebrew, which is required to run the TEN Playground frontend locally, and then verifies the installation by checking its version. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/learn-more/use-cases/build-ai-sales-avatar.mdx#_snippet_2 LANGUAGE: bash CODE: ``` brew install node ``` LANGUAGE: bash CODE: ``` node -v ``` ---------------------------------------- TITLE: OpenAI Application Error: Internal Server Error DESCRIPTION: This error suggests an internal server issue with OpenAI or a problem within the application itself, possibly due to server overload. It advises retrying the request. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/learn-more/faq/use-llms-faq.mdx#_snippet_1 LANGUAGE: JSON CODE: ``` The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application ``` ---------------------------------------- TITLE: Initialize Dify Plugin Project on macOS ARM64 DESCRIPTION: Command to initialize a new Dify plugin project using the scaffolding tool, specifically for macOS ARM64 architecture. SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-en/0222-creating-new-model-provider.mdx#_snippet_0 LANGUAGE: bash CODE: ``` ./dify-plugin-darwin-arm64 plugin init ``` ---------------------------------------- TITLE: Troubleshooting Internal Server Error in Dify App (Japanese) DESCRIPTION: This snippet displays a generic internal server error message encountered when interacting with the Dify application, suggesting either server overload or an application-specific error. SOURCE: https://github.com/langgenius/dify-docs/blob/main/ja-jp/learn-more/faq/llms-use-faq.mdx#_snippet_1 LANGUAGE: JSON CODE: ``` The server encountered an internal error and was unable to complete your request。Either the server is overloaded or there is an error in the application ``` ---------------------------------------- TITLE: Implement LLM Streaming and Synchronous Response Handling DESCRIPTION: This example demonstrates how to separate streaming and synchronous response logic within the `_invoke` method. It uses `_handle_stream_response` for generator-based streaming and `_handle_sync_response` for complete, synchronous results, ensuring clear logic and accommodating different return requirements. 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/plugins/quick-start/develop-plugins/model-plugin/predefined-model.mdx#_snippet_3 LANGUAGE: python CODE: ``` def _invoke(self, stream: bool, **kwargs) -> Union[LLMResult, Generator]: """Call the corresponding processing function based on return type.""" if stream: return self._handle_stream_response(**kwargs) return self._handle_sync_response(**kwargs) def _handle_stream_response(self, **kwargs) -> Generator: """Handle streaming response logic.""" for chunk in response: # Assume response is a streaming data iterator yield chunk def _handle_sync_response(self, **kwargs) -> LLMResult: """Handle synchronous response logic.""" return LLMResult(**response) # Assume response is a complete response dictionary ``` ---------------------------------------- TITLE: Clone LocalAI Repository and Navigate to Example Directory DESCRIPTION: Clones the LocalAI source code repository from GitHub and navigates into the `examples/langchain-chroma` directory, which contains setup files relevant for this integration. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/development/models-integration/localai.mdx#_snippet_0 LANGUAGE: bash CODE: ``` $ git clone https://github.com/go-skynet/LocalAI $ cd LocalAI/examples/langchain-chroma ``` ---------------------------------------- TITLE: Dify Server Operational Environment Variables DESCRIPTION: Configures server startup mode, debugging, security, deployment environment, logging, database migration, update checks, and API timeouts for the Dify backend. These settings are crucial for controlling the server's behavior and performance. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/getting-started/install-self-hosted/environments.mdx#_snippet_1 LANGUAGE: APIDOC CODE: ``` MODE: string description: Startup mode, only available when launched using Docker. Not applicable when running from source code. values: - api: Start API Server. - worker: Start asynchronous queue worker. DEBUG: boolean description: Debug mode. Disabled by default. Recommended to enable during local development to prevent issues caused by monkey patching. FLASK_DEBUG: boolean description: Flask debug mode. When enabled, outputs trace information in API responses for easier debugging. SECRET_KEY: string description: A secret key used for securely signing session cookies and encrypting sensitive information in the database. Must be set before the first launch. DEPLOY_ENV: string description: Deployment environment. values: - PRODUCTION (default): Production environment. - TESTING: Testing environment, indicated by a distinct color label on the front-end page. LOG_LEVEL: string description: The log output level. Default is INFO. Recommended to set to ERROR for production environments. MIGRATION_ENABLED: boolean description: When true, database migrations are automatically executed on container startup. Only available when launched using Docker. Not applicable when running from source code. CHECK_UPDATE_URL: string | boolean description: Controls the version checking policy. If set to false, the system will not call https://updates.dify.ai to check for updates. Setting to an empty value disables this API call. TEXT_GENERATION_TIMEOUT_MS: number description: Default value: 60000 (milliseconds). Specifies the timeout for text generation and workflow processes. Prevents system-wide service disruptions caused by individual processes exceeding their allocated time. 
OPENAI_API_BASE: string description: Used to change the OpenAI base address. Default is https://api.openai.com/v1. Can be replaced with a domestic mirror address or a local model providing OpenAI compatible API. ``` ---------------------------------------- TITLE: Implement LLM Invocation with Streaming and Synchronous Responses DESCRIPTION: This method defines the core logic for invoking a large language model, supporting both streaming and synchronous response modes. It handles various parameters such as model name, credentials, prompt messages, model parameters, tools for function calling, stop sequences, and user identification. SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-en/0412-model-schema.mdx#_snippet_4 LANGUAGE: python CODE: ``` def _invoke(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], model_parameters: dict, tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None, stream: bool = True, user: Optional[str] = None) \ -> Union[LLMResult, Generator]: """ Invoke large language model :param model: model name :param credentials: model credentials :param prompt_messages: prompt messages :param model_parameters: model parameters :param tools: tools for tool calling :param stop: stop words :param stream: is stream response :param user: unique user id :return: full response or stream response chunk generator result """ ``` LANGUAGE: APIDOC CODE: ``` LLM Invocation Method: _invoke(model: str, credentials: dict, prompt_messages: list[PromptMessage], model_parameters: dict, tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None, stream: bool = True, user: Optional[str] = None) Description: Invoke large language model. Parameters: model (string): Model name. credentials (object): Credential information. Defined by the provider YAML configuration file's `provider_credential_schema` or `model_credential_schema`, passed in as `api_key`, etc. prompt_messages (array[PromptMessage]): Prompt list. If the model is of `Completion` type, the list only needs to include one UserPromptMessage element; if the model is of `Chat` type, different messages need to be passed in as a list of SystemPromptMessage, UserPromptMessage, AssistantPromptMessage, ToolPromptMessage elements. model_parameters (object): Model parameters defined by the model YAML configuration's `parameter_rules`. tools (array[PromptMessageTool]) [optional]: Tool list, equivalent to `function` in `function calling`. This is the tool list passed to tool calling. stop (array[string]) [optional]: Stop sequence. The model response will stop output before the string defined in the stop sequence. stream (bool): Whether to stream output, default is True. For streaming output, it returns Generator[LLMResultChunk], for non-streaming output, it returns LLMResult. user (string) [optional]: A unique identifier for the user that can help the provider monitor and detect abuse. Return Value: For streaming output: Generator[LLMResultChunk] For non-streaming output: LLMResult ``` ---------------------------------------- TITLE: Dify Plugin Endpoint Python Implementation Requirements DESCRIPTION: Outlines the requirements for implementing a Dify plugin endpoint in Python. It details that the implementation must be a subclass of 'dify_plugin.Endpoint' and specifies the input parameters ('r', 'values', 'settings') and return type ('Response') for the '_invoke' method. 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/plugins/schema-definition/endpoint.mdx#_snippet_5 LANGUAGE: APIDOC CODE: ``` Endpoint Implementation: Base Class: dify_plugin.Endpoint Description: All endpoint implementations must inherit from this class. Method: _invoke(self, r: Request, values: Mapping, settings: Mapping) -> Response Description: The core method to implement the endpoint's logic. Parameters: r: Request (from werkzeug) Description: The Werkzeug request object. values: Mapping Description: A dictionary of path parameters parsed from the URL. settings: Mapping Description: A dictionary of configuration information for this specific Endpoint, derived from the group settings. Return: Response (from werkzeug) Description: The method must return a Werkzeug Response object. Streaming returns are supported. Direct string returns are not supported. ``` ---------------------------------------- TITLE: Implementing AWS Bedrock Knowledge Retrieval Service (Python) DESCRIPTION: This static method 'knowledge_retrieval' interacts with the AWS Bedrock Agent Runtime client. It initializes the 'boto3' client with provided AWS credentials and region, then calls the 'retrieve' method on the specified 'knowledgeBaseId' with a given 'query' and 'retrievalConfiguration'. It parses the Bedrock response, filters results based on a 'score_threshold', and formats them into a list of records containing metadata, score, title, and content. SOURCE: https://github.com/langgenius/dify-docs/blob/main/zh-hans/learn-more/use-cases/how-to-connect-aws-bedrock.mdx#_snippet_1 LANGUAGE: Python CODE: ``` import boto3 class ExternalDatasetService: @staticmethod def knowledge_retrieval(retrieval_setting: dict, query: str, knowledge_id: str): # get bedrock client client = boto3.client( "bedrock-agent-runtime", aws_secret_access_key="AWS_SECRET_ACCESS_KEY", aws_access_key_id="AWS_ACCESS_KEY_ID", # example: us-east-1 region_name="AWS_REGION_NAME", ) # fetch external knowledge retrieval response = client.retrieve( knowledgeBaseId=knowledge_id, retrievalConfiguration={ "vectorSearchConfiguration": {"numberOfResults": retrieval_setting.get("top_k"), "overrideSearchType": "HYBRID"} }, retrievalQuery={"text": query}, ) # parse response results = [] if response.get("ResponseMetadata") and response.get("ResponseMetadata").get("HTTPStatusCode") == 200: if response.get("retrievalResults"): retrieval_results = response.get("retrievalResults") for retrieval_result in retrieval_results: # filter out results with score less than threshold if retrieval_result.get("score") < retrieval_setting.get("score_threshold", .0): continue result = { "metadata": retrieval_result.get("metadata"), "score": retrieval_result.get("score"), "title": retrieval_result.get("metadata").get("x-amz-bedrock-kb-source-uri"), "content": retrieval_result.get("content").get("text"), } results.append(result) return { "records": results } ``` ---------------------------------------- TITLE: Orchestrate Chatflow for Image File Processing with Vision LLM DESCRIPTION: Steps to configure Chatflow to process image files directly using LLMs that support the VISION feature. No additional nodes are required for image processing. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/guides/workflow/additional-features.mdx#_snippet_1 LANGUAGE: APIDOC CODE: ``` 1. Enable the Features function and only check "Images" in the file types. 2. Add an LLM node, enable the VISION feature, and select the `sys.files` variable. 3. 
Add a "Answer" node at the end, filling in the output variable of the LLM node. ``` ---------------------------------------- TITLE: Python: Implement Model Credential Verification DESCRIPTION: All Dify models must implement the `validate_credentials` method to verify model-specific credentials. This method takes the model name and a dictionary of credentials, raising `errors.validate.CredentialsValidateFailedError` if verification fails. Credential parameters are defined by provider or model YAML schemas. SOURCE: https://github.com/langgenius/dify-docs/blob/main/ja-jp/guides/model-configuration/interfaces.mdx#_snippet_0 LANGUAGE: python CODE: ``` def validate_credentials(self, model: str, credentials: dict) -> None: """ Validate model credentials :param model: model name :param credentials: model credentials :return: """ ``` LANGUAGE: APIDOC CODE: ``` validate_credentials(self, model: str, credentials: dict) -> None Parameters: model (string): Model name credentials (object): Credential information. The parameters are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. Errors: errors.validate.CredentialsValidateFailedError: Thrown if verification fails. ``` ---------------------------------------- TITLE: LLMResultChunk Class Definition DESCRIPTION: Defines the LLMResultChunk class, representing each complete entity returned in streaming LLM responses. It includes the model used, prompt messages, system fingerprint, and the delta information for the current chunk. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/guides/model-configuration/interfaces.mdx#_snippet_25 LANGUAGE: python CODE: ``` class LLMResultChunk(BaseModel): """ Model class for llm result chunk. """ model: str # Actual used modele prompt_messages: list[PromptMessage] # prompt messages system_fingerprint: Optional[str] = None # request fingerprint, refer to OpenAI definition delta: LLMResultChunkDelta ``` ---------------------------------------- TITLE: Implement Speech2Text Model Invocation in Python DESCRIPTION: This snippet shows the implementation of the `_invoke` method for a Speech2Text model. It processes an audio file and returns the converted text string. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/plugins/schema-definition/model/model-schema.mdx#_snippet_10 LANGUAGE: Python CODE: ``` def _invoke(self, model: str, credentials: dict, file: IO[bytes], user: Optional[str] = None) \ -> str: """ Invoke large language model :param model: model name :param credentials: model credentials :param file: audio file :param user: unique user id :return: text for given audio file """ ``` LANGUAGE: APIDOC CODE: ``` Invoke Invocation: _invoke(model: str, credentials: dict, file: IO[bytes], user: Optional[str] = None) Parameters: model (string): Model name credentials (object): Credential information file (File): File stream user (string) [optional]: Unique user identifier to help providers monitor and detect abuse Return: Converted text string from speech ``` ---------------------------------------- TITLE: LLMResultChunkDelta Class Definition DESCRIPTION: Defines the LLMResultChunkDelta class, representing the incremental 'delta' entity returned in each iteration during streaming LLM responses. It contains the response message, optional usage info, and the finish reason (only in the last chunk). 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/guides/model-configuration/interfaces.mdx#_snippet_24 LANGUAGE: python CODE: ``` class LLMResultChunkDelta(BaseModel): """ Model class for llm result chunk delta. """ index: int message: AssistantPromptMessage # response message usage: Optional[LLMUsage] = None # usage info finish_reason: Optional[str] = None # finish reason, only the last one returns ``` ---------------------------------------- TITLE: Python: Implement Model Invocation Error Mapping DESCRIPTION: Models are required to implement the `_invoke_error_mapping` property to map internal model invocation exceptions to standardized Dify `InvokeError` types. This ensures consistent error handling within the Runtime. The mapping dictionary's keys are Dify's unified error types, and values are lists of model-specific exceptions. SOURCE: https://github.com/langgenius/dify-docs/blob/main/ja-jp/guides/model-configuration/interfaces.mdx#_snippet_1 LANGUAGE: python CODE: ``` @property def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]: """ Map model invoke error to unified error The key is the error type thrown to the caller The value is the error type thrown by the model, which needs to be converted into a unified error type for the caller. :return: Invoke error mapping """ ``` LANGUAGE: APIDOC CODE: ``` _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]] Purpose: Map model invoke error to unified error. Key: The error type thrown to the caller (unified InvokeError type). Value: The error type thrown by the model, which needs to be converted into a unified error type for the caller. Runtime Errors (Unified Types): InvokeConnectionError: Invocation connection error InvokeServerUnavailableError: Invocation service provider unavailable InvokeRateLimitError: Invocation reached rate limit InvokeAuthorizationError: Invocation authorization failure InvokeBadRequestError: Invocation parameter error ``` ---------------------------------------- TITLE: Package Dify Plugin for Publishing DESCRIPTION: This command packages your Dify plugin into a `.difypkg` file, preparing it for distribution. It should be run in the plugin project's root directory, replacing `<provider_name>` with your specific provider directory name. SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-en/0222-creating-new-model-provider-extra.mdx#_snippet_9 LANGUAGE: bash CODE: ``` dify plugin package models/<provider_name> ``` ---------------------------------------- TITLE: MYSCALE_PORT Configuration DESCRIPTION: Configures the port for MyScale database. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/getting-started/install-self-hosted/environments.mdx#_snippet_24 LANGUAGE: APIDOC CODE: ``` MYSCALE_PORT: Description: MyScale port configuration. ``` ---------------------------------------- TITLE: Implement Model Credential Validation DESCRIPTION: This method is used to validate credentials specific to a single model, similar to how supplier credentials are validated. 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/zh-hans/guides/model-configuration/customizable-model.mdx#_snippet_2 LANGUAGE: python CODE: ``` def validate_credentials(self, model: str, credentials: dict) -> None: """ Validate model credentials :param model: model name :param credentials: model credentials :return: """ ``` ---------------------------------------- TITLE: LLMResultChunk Class Definition DESCRIPTION: Represents each iteration entity in a streaming LLM response. It includes the model used, prompt messages, system fingerprint, and the `delta` content that changes in each iteration. SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-zh/0412-model-schema.mdx#_snippet_15 LANGUAGE: python CODE: ``` class LLMResultChunk(BaseModel): """ Model class for llm result chunk. """ model: str # 实际使用模型 prompt_messages: list[PromptMessage] # prompt 消息列表 system_fingerprint: Optional[str] = None # 请求指纹,可参考 OpenAI 该参数定义 delta: LLMResultChunkDelta # 每个迭代存在变化的内容 ``` LANGUAGE: APIDOC CODE: ``` LLMResultChunk (BaseModel) Description: Each iteration entity in a streaming return. Properties: model: str Description: Actual model used. prompt_messages: list[PromptMessage] Description: List of prompt messages. system_fingerprint: Optional[str] = None Description: Request fingerprint, refer to OpenAI's definition for this parameter. delta: LLMResultChunkDelta Description: Content that changes in each iteration. ``` ---------------------------------------- TITLE: Speech2text: Invoke Method Implementation DESCRIPTION: Implements the `_invoke` method for speech-to-text models, converting an audio file stream into text. It requires the model name, credentials, and an optional user ID. SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-en/0412-model-schema.mdx#_snippet_10 LANGUAGE: python CODE: ``` def _invoke(self, model: str, credentials: dict, file: IO[bytes], user: Optional[str] = None) \ -> str: """ Invoke large language model :param model: model name :param credentials: model credentials :param file: audio file :param user: unique user id :return: text for given audio file """ ``` LANGUAGE: APIDOC CODE: ``` Speech2TextModel._invoke(model: str, credentials: dict, file: File, user: Optional[str] = None) -> str model (string): Model name credentials (object): Credential information. Defined by provider YAML's `provider_credential_schema` or `model_credential_schema`, passed in as `api_key`, etc. file (File): Audio file stream. user (string) [optional]: A unique identifier for the user, can help the provider monitor and detect abuse. Returns: String after speech conversion. ``` ---------------------------------------- TITLE: Rerank: Invoke Method Implementation DESCRIPTION: Implements the `_invoke` method for reranking models, taking a query and a list of documents to reorder based on relevance. It supports optional score thresholds and top-N results. 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-en/0412-model-schema.mdx#_snippet_9 LANGUAGE: python CODE: ``` def _invoke(self, model: str, credentials: dict, query: str, docs: list[str], score_threshold: Optional[float] = None, top_n: Optional[int] = None, user: Optional[str] = None) \ -> RerankResult: """ Invoke rerank model :param model: model name :param credentials: model credentials :param query: search query :param docs: docs for reranking :param score_threshold: score threshold :param top_n: top n :param user: unique user id :return: rerank result """ ``` LANGUAGE: APIDOC CODE: ``` RerankModel._invoke(model: str, credentials: dict, query: str, docs: list[str], score_threshold: Optional[float] = None, top_n: Optional[int] = None, user: Optional[str] = None) -> RerankResult model (string): Model name credentials (object): Credential information. Defined by provider YAML's `provider_credential_schema` or `model_credential_schema`, passed in as `api_key`, etc. query (string): Query request content. docs (array[string]): List of segments that need to be reranked. score_threshold (float) [optional]: Score threshold. top_n (int) [optional]: Take the top n segments. user (string) [optional]: A unique identifier for the user, can help the provider monitor and detect abuse. Returns: RerankResult entity. ``` ---------------------------------------- TITLE: TextEmbedding: Invoke Method Implementation DESCRIPTION: Implements the `_invoke` method for text embedding models, processing a list of texts to generate embeddings. It requires model name, credentials, and an optional user ID. SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-en/0412-model-schema.mdx#_snippet_7 LANGUAGE: python CODE: ``` def _invoke(self, model: str, credentials: dict, texts: list[str], user: Optional[str] = None) \ -> TextEmbeddingResult: """ Invoke large language model :param model: model name :param credentials: model credentials :param texts: texts to embed :param user: unique user id :return: embeddings result """ ``` LANGUAGE: APIDOC CODE: ``` TextEmbeddingModel._invoke(model: str, credentials: dict, texts: list[str], user: Optional[str] = None) -> TextEmbeddingResult model (string): Model name credentials (object): Credential information. Defined by provider YAML's `provider_credential_schema` or `model_credential_schema`, passed in as `api_key`, etc. texts (array[string]): Text list, can be processed in batch. user (string) [optional]: A unique identifier for the user, can help the provider monitor and detect abuse. Returns: TextEmbeddingResult entity. ``` ---------------------------------------- TITLE: APIDOC: LLMResultChunkDelta Class Definition DESCRIPTION: Defines the LLMResultChunkDelta class, representing the incremental changes (delta) in each iteration during streaming LLM responses. It includes the message delta, optional usage info, and the finish reason for the last chunk. SOURCE: https://github.com/langgenius/dify-docs/blob/main/ja-jp/guides/model-configuration/interfaces.mdx#_snippet_23 LANGUAGE: python CODE: ``` class LLMResultChunkDelta(BaseModel): """ Model class for llm result chunk delta. 
""" index: int message: AssistantPromptMessage # response message usage: Optional[LLMUsage] = None # usage info finish_reason: Optional[str] = None # finish reason, only the last one returns ``` ---------------------------------------- TITLE: Implement Speech-to-Text Model Invocation in Python DESCRIPTION: Covers the implementation of the `_invoke` method for speech-to-text models, inheriting from `Speech2TextModel`. It specifies parameters like model name, credentials, and an audio file stream for conversion to text. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/guides/model-configuration/interfaces.mdx#_snippet_9 LANGUAGE: python CODE: ``` def _invoke(self, model: str, credentials: dict, file: IO[bytes], user: Optional[str] = None) -> str: """ Invoke large language model :param model: model name :param credentials: model credentials :param file: audio file :param user: unique user id :return: text for given audio file """ ``` LANGUAGE: APIDOC CODE: ``` Method: _invoke Parameters: model (string): Model name credentials (object): Credential information. The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. file (File): File stream user (string) [optional]: Unique identifier of the user. This can help the provider monitor and detect abusive behavior. Returns: string: The string after speech-to-text conversion. ``` ---------------------------------------- TITLE: Implement Rerank Model Invocation in Python DESCRIPTION: Outlines the implementation of the `_invoke` method for rerank models, inheriting from `RerankModel`. It details parameters such as model name, credentials, query, documents, score threshold, and top N for reranking. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/guides/model-configuration/interfaces.mdx#_snippet_8 LANGUAGE: python CODE: ``` def _invoke(self, model: str, credentials: dict, query: str, docs: list[str], score_threshold: Optional[float] = None, top_n: Optional[int] = None, user: Optional[str] = None) \ -> RerankResult: """ Invoke rerank model :param model: model name :param credentials: model credentials :param query: search query :param docs: docs for reranking :param score_threshold: score threshold :param top_n: top n :param user: unique user id :return: rerank result """ ``` LANGUAGE: APIDOC CODE: ``` Method: _invoke Parameters: model (string): Model name credentials (object): Credential information. The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included. query (string): Query request content docs (array[string]): List of segments to be reranked score_threshold (float) [optional]: Score threshold top_n (int) [optional]: Select the top n segments user (string) [optional]: Unique identifier of the user. This can help the provider monitor and detect abusive behavior. Returns: RerankResult entity. ``` ---------------------------------------- TITLE: Initialize Dify Plugin Project with Renamed Binary DESCRIPTION: Alternative command to initialize a new Dify plugin project, assuming the scaffolding tool binary has been renamed to 'dify' and placed in a system's executable path. 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-en/0222-creating-new-model-provider.mdx#_snippet_1 LANGUAGE: bash CODE: ``` dify plugin init ``` ---------------------------------------- TITLE: Implement LLM Invoke Method for Xinference DESCRIPTION: This core method handles LLM calls, supporting both streaming and synchronous responses. It requires separate implementations for stream and sync handling due to Python's generator function behavior. Parameters include model name, credentials, prompt messages, model parameters, tools, stop words, stream flag, and user ID. SOURCE: https://github.com/langgenius/dify-docs/blob/main/zh-hans/guides/model-configuration/customizable-model.mdx#_snippet_0 LANGUAGE: python CODE: ``` def _invoke(self, model: str, credentials: dict, prompt_messages: list[PromptMessage], model_parameters: dict, tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None, stream: bool = True, user: Optional[str] = None) \ -> Union[LLMResult, Generator]: """ Invoke large language model :param model: model name :param credentials: model credentials :param prompt_messages: prompt messages :param model_parameters: model parameters :param tools: tools for tool calling :param stop: stop words :param stream: is stream response :param user: unique user id :return: full response or stream response chunk generator result """ ``` LANGUAGE: python CODE: ``` def _invoke(self, stream: bool, **kwargs) \ -> Union[LLMResult, Generator]: if stream: return self._handle_stream_response(**kwargs) return self._handle_sync_response(**kwargs) def _handle_stream_response(self, **kwargs) -> Generator: for chunk in response: yield chunk def _handle_sync_response(self, **kwargs) -> LLMResult: return LLMResult(**response) ``` ---------------------------------------- TITLE: Restarting Mintlify Dev Server After Cache Clear DESCRIPTION: This command restarts the Mintlify development server after clearing the local cache (by deleting the '~/.mintlify' folder). It is a solution for resolving unknown errors that might be caused by corrupted cache data. SOURCE: https://github.com/langgenius/dify-docs/blob/main/development.mdx#_snippet_9 LANGUAGE: bash CODE: ``` mintlify dev ``` ---------------------------------------- TITLE: Prepare Dify CLI Binary Executable for macOS M Chips DESCRIPTION: This snippet demonstrates how to prepare the downloaded Dify Plugin CLI binary executable for use on macOS systems with M chips. It involves granting execution permissions to the file and then renaming it for easier command-line access. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/plugins/quick-start/develop-plugins/initialize-development-tools.mdx#_snippet_3 LANGUAGE: bash CODE: ``` chmod +x ./dify-plugin-darwin-arm64 mv ./dify-plugin-darwin-arm64 ./dify ``` ---------------------------------------- TITLE: Implementing Function Calling Agent Strategy and LLM Invocation in Python DESCRIPTION: This comprehensive snippet demonstrates the full implementation of the `FunctionCallingAgentStrategy`, including parameter initialization, constructing `prompt_messages` (system and user prompts), converting `ToolEntity` objects to `PromptMessageTool` format via `_init_prompt_tools` and `_convert_tool_to_prompt_message_tool`, and finally invoking the LLM using `self.session.model.llm.invoke()`. It details how to handle tool parameters, including type mapping and required fields, for proper LLM interaction. 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/zh-hans/plugins/schema-definition/agent.mdx#_snippet_5 LANGUAGE: python CODE: ``` from collections.abc import Generator from typing import Any from pydantic import BaseModel from dify_plugin.entities.agent import AgentInvokeMessage from dify_plugin.entities.model.llm import LLMModelConfig from dify_plugin.entities.model.message import ( PromptMessageTool, SystemPromptMessage, UserPromptMessage, ) from dify_plugin.entities.tool import ToolParameter from dify_plugin.interfaces.agent import AgentModelConfig, AgentStrategy, ToolEntity class FunctionCallingParams(BaseModel): query: str instruction: str | None model: AgentModelConfig tools: list[ToolEntity] | None maximum_iterations: int = 3 class FunctionCallingAgentStrategy(AgentStrategy): def _invoke(self, parameters: dict[str, Any]) -> Generator[AgentInvokeMessage]: """ Run FunctionCall agent application """ # init params fc_params = FunctionCallingParams(**parameters) query = fc_params.query model = fc_params.model stop = fc_params.model.completion_params.get("stop", []) if fc_params.model.completion_params else [] prompt_messages = [ SystemPromptMessage(content="your system prompt message"), UserPromptMessage(content=query), ] tools = fc_params.tools prompt_messages_tools = self._init_prompt_tools(tools) # invoke llm chunks = self.session.model.llm.invoke( model_config=LLMModelConfig(**model.model_dump(mode="json")), prompt_messages=prompt_messages, stream=True, stop=stop, tools=prompt_messages_tools, ) def _init_prompt_tools(self, tools: list[ToolEntity] | None) -> list[PromptMessageTool]: """ Init tools """ prompt_messages_tools = [] for tool in tools or []: try: prompt_tool = self._convert_tool_to_prompt_message_tool(tool) except Exception: # api tool may be deleted continue # save prompt tool prompt_messages_tools.append(prompt_tool) return prompt_messages_tools def _convert_tool_to_prompt_message_tool(self, tool: ToolEntity) -> PromptMessageTool: """ convert tool to prompt message tool """ message_tool = PromptMessageTool( name=tool.identity.name, description=tool.description.llm if tool.description else "", parameters={ "type": "object", "properties": {}, "required": [], }, ) parameters = tool.parameters for parameter in parameters: if parameter.form != ToolParameter.ToolParameterForm.LLM: continue parameter_type = parameter.type if parameter.type in { ToolParameter.ToolParameterType.FILE, ToolParameter.ToolParameterType.FILES, }: continue enum = [] if parameter.type == ToolParameter.ToolParameterType.SELECT: enum = [option.value for option in parameter.options] if parameter.options else [] message_tool.parameters["properties"][parameter.name] = { "type": parameter_type, "description": parameter.llm_description or "", } if len(enum) > 0: message_tool.parameters["properties"][parameter.name]["enum"] = enum if parameter.required: message_tool.parameters["required"].append(parameter.name) return message_tool ``` ---------------------------------------- TITLE: Implement Rerank Model Invocation in Python DESCRIPTION: This snippet demonstrates how to implement the `_invoke` method for a Rerank model. It takes a query, a list of documents, and optional parameters like score threshold and top N, returning a `RerankResult`. 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/plugins/schema-definition/model/model-schema.mdx#_snippet_9 LANGUAGE: Python CODE: ``` def _invoke(self, model: str, credentials: dict, query: str, docs: list[str], score_threshold: Optional[float] = None, top_n: Optional[int] = None, user: Optional[str] = None) \ -> RerankResult: """ Invoke rerank model :param model: model name :param credentials: model credentials :param query: search query :param docs: docs for reranking :param score_threshold: score threshold :param top_n: top n :param user: unique user id :return: rerank result """ ``` LANGUAGE: APIDOC CODE: ``` Rerank Invocation: _invoke(model: str, credentials: dict, query: str, docs: list[str], score_threshold: Optional[float] = None, top_n: Optional[int] = None, user: Optional[str] = None) Parameters: model (string): Model name credentials (object): Credential information query (string): Search query content docs (array[string]): List of segments to be re-ranked score_threshold (float) [optional]: Score threshold top_n (int) [optional]: Take top n segments user (string) [optional]: Unique user identifier to help providers monitor and detect abuse Return: RerankResult entity ``` ---------------------------------------- TITLE: Python: Implement Model Credential Validation DESCRIPTION: All models must implement `validate_credentials` to verify individual model credentials. This method takes the model name and credentials (defined by `provider_credential_schema` or `model_credential_schema`) as input. A `CredentialsValidateFailedError` should be raised if validation fails. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/guides/model-configuration/interfaces.mdx#_snippet_1 LANGUAGE: python CODE: ``` def validate_credentials(self, model: str, credentials: dict) -> None: """ Validate model credentials :param model: model name :param credentials: model credentials :return: """ ``` ---------------------------------------- TITLE: Implement Bearer Authentication in Hono DESCRIPTION: TypeScript code snippet demonstrating how to implement Bearer authentication using the `hono/bearer-auth` package within a Cloudflare Worker, accessing the token from environment variables for security. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/guides/extension/api-based-extension/cloudflare-workers.mdx#_snippet_6 LANGUAGE: typescript CODE: ``` import { bearerAuth } from "hono/bearer-auth"; (c, next) => { const auth = bearerAuth({ token: c.env.TOKEN }); return auth(c, next); }, ``` ---------------------------------------- TITLE: Implement Google Tool Provider for Credential Validation in Python DESCRIPTION: This Python code defines the `GoogleProvider` class, inheriting from `dify_plugin.ToolProvider`. It implements the `_validate_credentials` method to ensure that the provided API credentials are valid by attempting a test invocation of the `GoogleSearchTool`. 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-en/0222-tool-plugin.mdx#_snippet_9 LANGUAGE: python CODE: ``` from typing import Any from dify_plugin import ToolProvider from dify_plugin.errors.tool import ToolProviderCredentialValidationError from tools.google_search import GoogleSearchTool class GoogleProvider(ToolProvider): def _validate_credentials(self, credentials: dict[str, Any]) -> None: try: for _ in GoogleSearchTool.from_credentials(credentials).invoke( tool_parameters={"query": "test", "result_type": "link"} ): pass except Exception as e: raise ToolProviderCredentialValidationError(str(e)) ``` ---------------------------------------- TITLE: Grant Execution Permissions for Dify Plugin Tool (macOS) DESCRIPTION: Before using the Dify plugin scaffolding tool, grant it execution permissions. This command makes the downloaded binary executable on macOS, allowing it to be run from the terminal. Replace `dify-plugin-darwin-arm64` with the actual downloaded file name. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/plugins/publish-plugins/package-plugin-file-and-publish.mdx#_snippet_0 LANGUAGE: bash CODE: ``` chmod +x dify-plugin-darwin-arm64 ``` ---------------------------------------- TITLE: Grant Execution Permissions for Dify Plugin Daemon (macOS) DESCRIPTION: This command grants execute permissions to the downloaded Dify plugin daemon binary on macOS, specifically for M-series chips, allowing it to be run. Navigate to the file's location in the terminal before executing. SOURCE: https://github.com/langgenius/dify-docs/blob/main/plugin-dev-en/0221-initialize-development-tools.mdx#_snippet_0 LANGUAGE: bash CODE: ``` chmod +x dify-plugin-darwin-arm64 ``` ---------------------------------------- TITLE: Python External Data Tool Implementation Class Template DESCRIPTION: Provides a template for creating custom external data tools in Python, demonstrating the structure for defining a tool, validating its configuration, and implementing its data query logic. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/guides/extension/code-based-extension/external-data-tool.mdx#_snippet_3 LANGUAGE: python CODE: ``` from typing import Optional from core.external_data_tool.base import ExternalDataTool class WeatherSearch(ExternalDataTool): """ The name of custom type must be unique, keep the same with directory and file name. """ name: str = "weather_search" @classmethod def validate_config(cls, tenant_id: str, config: dict) -> None: """ schema.json validation. It will be called when user save the config. :param tenant_id: the id of workspace :param config: the variables of form config :return: """ # implement your own logic here def query(self, inputs: dict, query: Optional[str] = None) -> str: """ Query the external data tool. :param inputs: user inputs :param query: the query of chat app :return: the tool query result """ # implement your own logic here return "your own data." ``` ---------------------------------------- TITLE: Start Xinference Local Service DESCRIPTION: This command initiates the Xinference local service, making it available for model deployment and inference. It outputs the endpoint URL and confirms the successful start of the supervisor and worker components. By default, it runs on `http://127.0.0.1:9997` and is accessible only from the local machine. 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/development/models-integration/xinference.mdx#_snippet_1 LANGUAGE: bash CODE: ``` $ xinference-local 2023-08-20 19:21:05,265 xinference 10148 INFO Xinference successfully started. Endpoint: http://127.0.0.1:9997 2023-08-20 19:21:05,266 xinference.core.supervisor 10148 INFO Worker 127.0.0.1:37822 has been added successfully 2023-08-20 19:21:05,267 xinference.deploy.worker 10148 INFO Xinference worker successfully started. ``` ---------------------------------------- TITLE: Local Cloudflare Worker Development Server Output DESCRIPTION: Example console output when running the Cloudflare Worker locally, showing the worker bindings (including masked environment variables) and the local server address for testing. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/guides/extension/api-based-extension/cloudflare-workers.mdx#_snippet_5 LANGUAGE: bash CODE: ``` $ npm run dev > dev > wrangler dev src/index.ts ⛅️ wrangler 3.99.0 ------------------- Your worker has access to the following bindings: - Vars: - TOKEN: "ban****ool" ⎔ Starting local server... [wrangler:inf] Ready on http://localhost:58445 ``` ---------------------------------------- TITLE: Execute Dify Local to Cloud Storage Migration Commands DESCRIPTION: Commands to transfer private key files and user-uploaded files from Dify's local storage to a configured cloud storage provider. Separate commands are provided for direct source code deployments and Docker Compose environments. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/getting-started/install-self-hosted/install-faq.mdx#_snippet_17 LANGUAGE: bash CODE: ``` flask upload-private-key-file-to-cloud-storage flask upload-local-files-to-cloud-storage ``` LANGUAGE: bash CODE: ``` docker exec -it docker-api-1 flask upload-private-key-file-to-cloud-storage docker exec -it docker-api-1 flask upload-local-files-to-cloud-storage ``` ---------------------------------------- TITLE: Starting Local Development Server for Dify Documentation DESCRIPTION: This command initiates the local development server for Dify documentation using the Mintlify CLI. It should be run from the project's root directory where the docs.json configuration file is located, allowing live preview of changes. SOURCE: https://github.com/langgenius/dify-docs/blob/main/README.md#_snippet_1 LANGUAGE: bash CODE: ``` mintlify dev ``` ---------------------------------------- TITLE: Dify Workflow: Handling Multiple Uploaded Files with Iterative Nodes DESCRIPTION: This snippet explains how to configure Dify to process multiple uploaded files using an iterative node. It highlights the requirement for the input variable to be an array type and how to adjust the start node accordingly. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/workshop/intermediate/article-reader.mdx#_snippet_2 LANGUAGE: Dify Configuration CODE: ``` To handle multiple uploaded files, an iterative node is needed. The input variable for the iterative node can only be of type `array` (list). Adjust the file variable in the start node to an `array` type, i.e., a file list. ``` ---------------------------------------- TITLE: Starting Dify Web Service Locally with PNPM DESCRIPTION: This command starts the Dify web service locally, making it accessible via a web browser. After a successful build, this command launches the Next.js server, typically on port 3000, allowing users to access the application at `http://localhost:3000`. 
SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/getting-started/install-self-hosted/local-source-code.mdx#_snippet_17 LANGUAGE: Shell CODE: ``` pnpm start ``` ---------------------------------------- TITLE: Implement Dify Moderation Model Invocation in Python DESCRIPTION: This snippet illustrates the `_invoke` method for the `ModerationModel` base class. It defines the parameters for moderating text content, such as the model, credentials, and the text itself, returning a boolean indicating if the text is safe. SOURCE: https://github.com/langgenius/dify-docs/blob/main/ja-jp/guides/model-configuration/interfaces.mdx#_snippet_10 LANGUAGE: python CODE: ``` def _invoke(self, model: str, credentials: dict, text: str, user: Optional[str] = None) \ -> bool: """ Invoke large language model :param model: model name :param credentials: model credentials :param text: text to moderate :param user: unique user id :return: false if text is safe, true otherwise """ ``` LANGUAGE: APIDOC CODE: ``` _invoke(model: str, credentials: dict, text: str, user: Optional[str] = None) -> bool Parameters: model (string): Model name credentials (object): Credential information. The parameters are defined by `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file, including `api_key`. text (string): Text content user (string) [optional]: Unique identifier of the user. This can help the provider monitor and detect abusive behavior. Returns: False indicates that the input text is safe, True indicates otherwise. ``` ---------------------------------------- TITLE: Deploy OpenLLM Server with Docker DESCRIPTION: This command initiates an OpenLLM server within a Docker container, making a specified large language model available for inference on port 3333. The example uses the `facebook/opt-1.3b` model, which can be replaced with other supported models. SOURCE: https://github.com/langgenius/dify-docs/blob/main/en/development/models-integration/openllm.mdx#_snippet_0 LANGUAGE: bash CODE: ``` docker run --rm -it -p 3333:3000 ghcr.io/bentoml/openllm start facebook/opt-1.3b --backend pt ``` ---------------------------------------- TITLE: Validating Moderation Configuration in Python DESCRIPTION: This method, `validate_config`, is called when a user saves the moderation configuration. It performs schema validation for the `config` dictionary, ensuring that `inputs_config` and `outputs_config` are properly structured. Developers should implement custom validation logic here. SOURCE: https://github.com/langgenius/dify-docs/blob/main/zh-hans/guides/tools/extensions/code-based/moderation.mdx#_snippet_2 LANGUAGE: Python CODE: ``` @classmethod def validate_config(cls, tenant_id: str, config: dict) -> None: """ schema.json validation. It will be called when user save the config. :param tenant_id: the id of workspace :param config: the variables of form config :return: """ cls._validate_inputs_and_outputs_config(config, True) # implement your own logic here ``` ---------------------------------------- TITLE: APIDOC: LLMResultChunk Class Definition DESCRIPTION: Defines the LLMResultChunk class, representing each full iteration entity in streaming LLM responses. It combines model information, prompt messages, system fingerprint, and the delta changes. SOURCE: https://github.com/langgenius/dify-docs/blob/main/ja-jp/guides/model-configuration/interfaces.mdx#_snippet_24 LANGUAGE: python CODE: ``` class LLMResultChunk(BaseModel): """ Model class for llm result chunk. 
""" model: str # Actual used modele prompt_messages: list[PromptMessage] # prompt messages system_fingerprint: Optional[str] = None # request fingerprint, refer to OpenAI definition delta: LLMResultChunkDelta ```
Let me search the web for more up-to-date information.
Dify MCP server implementation guide 2024
10 results
Turn Your Dify App into an MCP Server – Dify Blog (dify.ai)
Using MCP Tools – Dify Docs (dify.ai)
Dify MCP server for AI agents (playbooks.com)
Dify MCP Server – Claude MCP Servers (claudemcp.com)
Model Context Protocol (MCP) Server for dify workflows | MCP Servers · LobeHub (lobehub.com)
Dify v1.6.0: Built-in Two-Way MCP Support – Dify Blog (dify.ai)
Dify MCP Plugin Guide: Connect Zapier and Automate Email Delivery with Ease | Dify (dify.ai)
Fetched Using MCP Tools – Dify Docs
dify.ai
Fetched Turn Your Dify App into an MCP Server – Dify Blog
dify.ai
Here is a detailed explanation of how to implement an MCP (Model Context Protocol) server in Dify.
## Implementing an MCP Server in Dify
Starting with Dify v1.6.0, MCP is supported natively, and there are two ways to work with it:
### 1. Turning a Dify App into an MCP Server
The mcp-server plugin lets you expose any Dify app as an MCP-compatible server endpoint.
#### Setup steps
1. Install the plugin
   - Download and install the `mcp-server` plugin from the Dify Marketplace.
2. Select an app
   - Choose the Dify app you want to expose as an MCP server (e.g., an Agent, Chatflow, or Workflow app).
3. Configure the MCP server endpoint
   - Endpoint Name: a name for the endpoint
   - App: the Dify app to publish as an MCP server
   - App Type: Chat or Workflow
   - App Input Schema: the app's input parameters, defined as JSON
   Example schema:
```json
{
  "properties": {
    "query": {
      "type": "string",
      "description": "Research question or topic"
    },
    "depth": {
      "type": "integer",
      "description": "Number of search rounds (1-5)",
      "minimum": 1,
      "maximum": 5
    }
  },
  "required": ["query"],
  "description": "Conducts deep research on a given topic through multiple search rounds"
}
```
4. Get the endpoint URL
   - After saving the settings, the plugin generates a unique Endpoint URL.
   - This URL supports both the HTTP and SSE protocols.
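Two practical notes. First, the property names in the App Input Schema above should correspond to inputs your Dify app actually accepts (the Start node variables for a Workflow app, typically the query for a Chat app); this is worth double-checking against the mcp-server plugin's documentation. Second, you can sanity-check the generated endpoint with any MCP client. Below is a minimal sketch using the official MCP Python SDK over SSE; the endpoint URL and the tool name (`deep_research`) are placeholders for whatever your plugin actually exposes.

```python
# Minimal sketch: connect to the Dify-generated MCP endpoint over SSE.
# Assumes the official MCP Python SDK ("mcp" package); the URL and tool
# name below are placeholders -- substitute the values from your plugin.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    endpoint = "https://your-dify-host/e/your-endpoint-id/sse"  # placeholder

    async with sse_client(endpoint) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List the tools the Dify app exposes through the plugin.
            tools = await session.list_tools()
            print("available tools:", [t.name for t in tools.tools])

            # Call the exposed tool (name and arguments are hypothetical,
            # matching the example schema above).
            result = await session.call_tool(
                "deep_research", {"query": "What is MCP?", "depth": 2}
            )
            print(result)


asyncio.run(main())
```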
### 2. Using External MCP Servers in Dify
Tools from external MCP servers can be used directly in Dify agent and workflow applications.
#### Setup steps
1. Open MCP server management
   - In your Dify workspace, go to Tools → MCP.
2. Add an MCP server
   - Click "Add MCP Server (HTTP)"
   - Enter the Server URL (e.g., https://api.notion.com/mcp); a minimal example of such a server is sketched after this list
   - Set a name and icon
   - Set the Server Identifier (a unique ID that cannot be changed later)
3. Discover and authorize tools
   - Dify automatically discovers the available tools
   - Handles the OAuth flow when required
   - Imports the tool definitions
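For reference, here is a minimal sketch of an external MCP server that could be registered this way. It uses FastMCP from the official MCP Python SDK, which is an assumption about tooling rather than something from the Dify docs; the server name, tool, and default port are illustrative only.

```python
# Minimal sketch of an external MCP server that Dify could consume.
# Assumes the official MCP Python SDK's FastMCP helper ("mcp" package);
# names, tool logic, and the transport are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")


@mcp.tool()
def get_weather(city: str) -> str:
    """Return a dummy weather report for the given city."""
    return f"It is always sunny in {city}."


if __name__ == "__main__":
    # Serves SSE (by default at http://localhost:8000/sse); register that URL
    # in Dify via Tools -> MCP -> "Add MCP Server (HTTP)".
    mcp.run(transport="sse")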
### 3. Technical Implementation Details
#### MCP server configuration (JSON)
```json
{
  "server_name": {
    "url": "https://your-mcp-server-url/sse",
    "headers": {},
    "timeout": 5,
    "sse_read_timeout": 300
  }
}
```
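Note that this JSON shape (a server name keyed to `url`, `headers`, `timeout`, and `sse_read_timeout`) comes from the plugin-based MCP tool approach described in the Dify MCP plugin guide; with the built-in support in v1.6.0 and later, the equivalent settings are entered through the Tools → MCP UI described above rather than pasted as JSON.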
#### Plugin environment variables (.env)
```bash
INSTALL_METHOD=remote
REMOTE_INSTALL_HOST=<your-dify-domain-or-ip>
REMOTE_INSTALL_PORT=5003
REMOTE_INSTALL_KEY=****-****-****-****-****
```
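These variables follow Dify's standard remote-debugging convention for plugins: with them in the plugin project's `.env`, running the plugin entry point (typically `python -m main`) connects the plugin to your Dify instance so the endpoint can be tested before packaging. Details can vary by plugin version, so check the plugin's README.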
### 4. Usage Example
#### Using a Dify MCP server in Cursor IDE
```json
{
  "mcpServers": {
    "dify-deep-research": {
      "url": "sse+http://your-dify-endpoint-url"
    }
  }
}
```
### 5. Security and Best Practices
- Security: run the MCP server plugin only in a private network environment where possible.
- Server ID consistency: do not change a server identifier once it has been set.
- Environment consistency: keep the MCP server configuration identical across development, staging, and production.
- Incremental updates: when an external service updates its MCP server, test it in a development environment before rolling the change into production.
### 6. Troubleshooting
- "Unconfigured Server" error: check the server URL and re-authorize the server.
- Missing tools: click the "Update Tools" button to refresh the tool list.
- Application errors: changing a server identifier can break tools already used in existing applications.
In short, Dify lets you implement an MCP server through its plugin system or consume external MCP servers as a client, and native MCP support is expected to become even more convenient in future updates.
