Langdock requires three types of models to cover all platform features. Add them in Workspace Settings -> Models. For setup instructions per model type, see Adding your own models.
Set up your models in Langdock
Completion models
These are the models your users select in chat. Add the models you want to make available from the providers you have keys for. We support models hosted by Microsoft Azure, AWS Bedrock, Google Vertex AI, Google AI Studio, OpenAI, Anthropic, Mistral, DeepSeek, Perplexity, Black Forest Labs, Replicate, and any OpenAI-compatible endpoint.
For quotas, between 200k and 500k TPM (tokens per minute) covers around 200 users; for your most-used model, you may need 500k–1M TPM. We recommend setting up multiple deployments in different regions: if a model fails in one region, Langdock automatically retries in another.
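As a rough sizing aid, the quota guidance above works out to roughly 1,000–2,500 TPM per user. A minimal sketch of that arithmetic (the 200-user baseline comes from the text; the per-user defaults are derived from it, not an official Langdock figure):

```python
def tpm_estimate(users: int,
                 tpm_per_user_low: int = 1_000,
                 tpm_per_user_high: int = 2_500) -> tuple[int, int]:
    """Estimate a TPM quota range for a given user count.

    Derived from the guidance above: 200k-500k TPM covers around
    200 users, i.e. roughly 1,000-2,500 tokens per minute per user.
    """
    return users * tpm_per_user_low, users * tpm_per_user_high

# 200 users -> (200000, 500000), matching the guidance above.
low, high = tpm_estimate(200)
```

For your most-used model, size toward the high end of the returned range.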
Image generation models
Add at least one image generation model so users can generate images from chat. See the Image tab in the adding models guide for setup instructions.
Embedding model
Embedding models power document search and Knowledge Folders. The platform requires a model with 1536 dimensions; any OpenAI- or Azure-compatible embedding model with 1536 dimensions will work.
Deep Research models
Deep Research requires three dedicated models. Go to Settings > Products > Deep Research and assign one model to each role:

| Role | What it does | Recommended |
|---|---|---|
| Reasoning Model | Plans the research, evaluates results, and generates the final report. | o3 |
| Fast Reasoning Model | Generates per-task summaries during the research loop. | o4 Mini |
| Backbone Model | Handles loop decisions and polishes the final report. | GPT-5 Mini |
Recommended settings: Reasoning and Fast Reasoning models
| Setting | Recommended value |
|---|---|
| API Type | Responses API |
| Reasoning Effort | Medium |
| Verbosity | Medium |
| Supports Temperature | Disabled |
| Supports Tools | Enabled |
Recommended settings: Backbone model
| Setting | Recommended value |
|---|---|
| API Type | Responses API |
| Reasoning Effort | Minimal |
| Verbosity | Low |
| Supports Temperature | Disabled |
| Strict Mode | Disabled |
| Supports Tools | Enabled |
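If you keep workspace configuration notes as code (for review or drift checks), the two settings tables above can be captured in a small sketch. The keys mirror the UI labels; this is bookkeeping only, not a Langdock API:

```python
# Recommended Deep Research model settings, transcribed from the tables
# above. Keys are informal mirrors of the UI labels, not an official schema.
RECOMMENDED_SETTINGS = {
    "reasoning_and_fast_reasoning": {
        "api_type": "Responses API",
        "reasoning_effort": "Medium",
        "verbosity": "Medium",
        "supports_temperature": False,
        "supports_tools": True,
    },
    "backbone": {
        "api_type": "Responses API",
        "reasoning_effort": "Minimal",
        "verbosity": "Low",
        "supports_temperature": False,
        "strict_mode": False,
        "supports_tools": True,
    },
}
```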
1. Set up your models
Make sure your workspace has:
- Completion models from the providers you want to offer
- 1 or more image generation models
- 1x Embedding model (1536 dimensions)
- 3x Deep Research models configured with the correct settings above
2. Reach out to the Langdock team
Once your models are set up, contact the Langdock team, and we will activate BYOK for your workspace on our side.
3. Test your models
Once BYOK is active, verify each model type works:
- Completion models: send a prompt to each model in the interface (e.g. “write a story about dogs”).
- Image model: ask any model to generate an image.
- Embedding model: create a folder in the Library, upload a file, then @mention the folder in a chat and ask a question about the file. You should get an answer based on the file content.
- Deep Research: run a Deep Research query and check that the report is detailed and well-structured. If it’s shallow, recheck the model configuration settings above.
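The completion and embedding checks above can also be scripted against any OpenAI-compatible endpoint. A minimal sketch of the two offline-checkable pieces; the payload shape follows OpenAI API conventions, and the model name is a placeholder, not a Langdock-specific value:

```python
# Hypothetical smoke-test helpers for the verification steps above.

def chat_smoke_payload(model: str,
                       prompt: str = "write a story about dogs") -> dict:
    """Build a minimal chat-completion request body for a smoke test,
    following the common OpenAI-compatible request shape."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def check_embedding_dims(vector: list[float], expected: int = 1536) -> bool:
    """Verify an embedding vector has the 1536 dimensions Langdock requires."""
    return len(vector) == expected

# Example: build a payload and validate a (mocked) embedding response.
payload = chat_smoke_payload("gpt-4o")          # model name is an example
ok = check_embedding_dims([0.0] * 1536)          # True for a 1536-dim vector
```

Send the payload to your deployment's chat endpoint and run the dimension check on the embedding response to confirm both model types before rolling out to users.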