Which models to add
AI models evolve quickly. We recommend adding at least one flagship model from each major provider; this gives your users access to the best available model for each task while keeping your setup manageable. When a provider releases a new model, add it alongside the existing one rather than replacing it: users may have agents or workflows that rely on specific models, and removing a model without notice can disrupt them. Select a provider to see the recommended model types and their configuration values.
Model-specific configuration
Use these values when configuring models manually. With a prebuilt Langdock config, they are applied automatically.
- OpenAI
- Anthropic
- Google
- Others
Add these model types:
- Latest flagship - Most capable for complex tasks
- Efficient variant (mini/nano) - Fast, cost-effective for everyday use
- Reasoning model (o-series) - For analytical and mathematical tasks
GPT-5.2
| Model | API Type | Context Size | Max Output Tokens | Special Configuration |
|---|---|---|---|---|
| GPT-5.2 | Responses API | 400,000 | 32,000 | Reasoning: minimal, verbosity: low |
| GPT-5.2 (Thinking) | Responses API | 400,000 | 32,000 | Reasoning: high, verbosity: low |
| GPT-5.2 Pro | Responses API | 400,000 | 32,000 | Max reasoning depth |
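To make the "Special Configuration" column concrete, the GPT-5.2 rows above can be sketched as Responses API request bodies. This is an illustration only: the model IDs and exact field layout are assumptions based on the public OpenAI Responses API shape, not Langdock internals.

```python
# Sketch of a Responses API request body matching the GPT-5.2 rows above.
# Model IDs and field layout are assumptions based on the public OpenAI
# Responses API, not the exact payload Langdock sends.
gpt_5_2 = {
    "model": "gpt-5.2",                  # assumed model ID
    "max_output_tokens": 32_000,         # "Max Output Tokens" column
    "reasoning": {"effort": "minimal"},  # "Special Configuration" column
    "text": {"verbosity": "low"},
}

# The Thinking variant shares every value except the reasoning effort:
gpt_5_2_thinking = {**gpt_5_2, "reasoning": {"effort": "high"}}
```

The same pattern applies to the GPT-5.1 and GPT-5 tables below: one base body per family, with only the reasoning effort varying between the standard and Thinking variants.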
GPT-5.1
| Model | API Type | Context Size | Max Output Tokens | Special Configuration |
|---|---|---|---|---|
| GPT-5.1 | Responses API | 400,000 | 32,000 | Reasoning: minimal, verbosity: low |
| GPT-5.1 (Thinking) | Responses API | 400,000 | 32,000 | Reasoning: high, verbosity: low |
| GPT-5.1 Chat | Responses API | 128,000 | 16,000 | Azure EU global deployment or OpenAI only |
GPT-5
| Model | API Type | Context Size | Max Output Tokens | Special Configuration |
|---|---|---|---|---|
| GPT-5 | Responses API | 400,000 | 32,000 | Reasoning: minimal, verbosity: low |
| GPT-5 (Thinking) | Responses API | 400,000 | 32,000 | Reasoning: high, verbosity: low |
| GPT-5 Chat | Responses API | 128,000 | 16,000 | — |
| GPT-5 mini | Responses API | 400,000 | 16,000 | — |
| GPT-5 nano | Responses API | 400,000 | 16,000 | — |
GPT-4.1
| Model | API Type | Context Size | Max Output Tokens | Special Configuration |
|---|---|---|---|---|
| GPT-4.1 | Completion API | 1,047,576 | 32,000 | Good for data analysis |
| GPT-4.1 mini | Completion API | 1,047,576 | 32,000 | — |
| GPT-4.1 nano | Completion API | 1,047,576 | 32,000 | — |
GPT-4o
| Model | API Type | Context Size | Max Output Tokens | Special Configuration |
|---|---|---|---|---|
| GPT-4o | Completion API | 128,000 | 16,000 | — |
| GPT-4o Mini | Completion API | 128,000 | 16,000 | — |
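Unlike the GPT-5 family, the GPT-4.1 and GPT-4o families use the Completion API, so their request bodies follow the Chat Completions shape. A minimal sketch, assuming the standard public OpenAI Chat Completions fields (the message content is just a placeholder):

```python
# Sketch of a Chat Completions request body for the GPT-4o row above.
# Field names follow the public OpenAI Chat Completions API; this is an
# illustration of the table values, not the exact payload Langdock sends.
gpt_4o = {
    "model": "gpt-4o",
    "max_tokens": 16_000,  # "Max Output Tokens" column
    "messages": [{"role": "user", "content": "Hello"}],
}
```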
Reasoning models (o-series)
| Model | API Type | Context Size | Max Output Tokens | Special Configuration |
|---|---|---|---|---|
| o3 | Responses API | 200,000 | 32,000 | — |
| o3 Mini | Completion API | 200,000 | 32,000 | Model ID: o3-mini |
| o3 Mini high | Completion API | 200,000 | 32,000 | Model ID: o3-mini, effort: high |
| o4 Mini | Responses API | 200,000 | 32,000 | — |
| o4 Mini high | Responses API | 200,000 | 32,000 | Model ID: o4-mini, effort: high |
| o1 | Completion API | 200,000 | 32,000 | — |
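Note that the "high" rows are not separate model IDs: they reuse the base model with the reasoning effort raised. A sketch of that pattern, assuming the public Chat Completions `reasoning_effort` parameter (reasoning models take `max_completion_tokens` rather than `max_tokens`):

```python
# o3 Mini vs. o3 Mini high: same model ID, different reasoning effort.
# This mirrors the "Special Configuration" column above; an illustration
# of the pattern, not Langdock's exact payload.
o3_mini = {
    "model": "o3-mini",
    "max_completion_tokens": 32_000,  # reasoning models use this field
}
o3_mini_high = {**o3_mini, "reasoning_effort": "high"}
```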
For the most up-to-date model information and capabilities, check the model picker at app.langdock.com. Model naming follows consistent patterns across providers; see our Model Guide for help understanding these patterns.