Selecting a Model
- Whenever you start a new chat, you can select which model to use from the dropdown in the top left
- You can switch models mid-conversation - for example, start with a fast model for brainstorming, then switch to a more powerful one for the final output
- Set your personal default model in account settings
Understanding Model Naming Conventions
AI providers follow broadly consistent naming patterns that help you quickly gauge a model's capabilities. Understanding these patterns lets you choose the right model without memorizing specific versions.
Version Numbers = Capability Level
Higher version numbers generally indicate newer, more capable models. When a provider releases a new generation, they increment the major version number.
| Pattern | What it means |
|---|---|
| GPT-5 vs GPT-4 | GPT-5 is the newer generation |
| Claude 4 vs Claude 3 | Claude 4 is the newer generation |
| Gemini 2.5 vs Gemini 2.0 | Gemini 2.5 is newer within the same generation |
When in doubt, choose the model with the higher version number - it typically has better reasoning, fewer errors, and more capabilities.
Size Indicators = Speed vs Intelligence Trade-off
Providers offer multiple sizes within each model family. Models without size indicators are the most intelligent but may be slower. Models with size indicators trade some capability for speed and cost efficiency.
| Indicator | Intelligence | Speed | Best for |
|---|---|---|---|
| No indicator (e.g., “GPT-5”, “Claude Sonnet”) | Highest | Moderate | Complex tasks, important outputs |
| mini / nano | Medium-High | Fast | Everyday tasks, quick iterations |
| flash / fast | Medium | Very Fast | Real-time applications, high volume |
| haiku (Anthropic) | Good | Very Fast | Simple tasks, cost-sensitive use cases |
Pro tip: Start with a faster model for drafts and exploration, then switch to the full model for your final output. This saves time while still getting high-quality results when it matters.
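The draft-then-refine workflow above can be sketched as a tiny two-step pipeline. Here `complete` is a hypothetical stand-in for whatever chat call or UI action you use, and the model names are illustrative examples, not a fixed list:

```python
def complete(model: str, prompt: str) -> str:
    """Placeholder for a chat completion call (hypothetical)."""
    return f"[{model}] response to: {prompt}"

def draft_then_refine(task: str) -> str:
    # 1. Explore quickly with a small, fast model.
    draft = complete("gpt-5-mini", f"Draft ideas for: {task}")
    # 2. Switch to the flagship model for the polished final output.
    return complete("gpt-5", f"Refine this draft into a final version:\n{draft}")
```

The same pattern applies in the chat UI: brainstorm with a mini/flash model, then use the model dropdown to switch before asking for the final version.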
Reasoning/Thinking Variants = Deep Analysis
Some models have “Reasoning” or “Thinking” variants (e.g., “GPT-5 Thinking”, “Claude Opus Reasoning”). These are specifically optimized for:
- Complex multi-step problems
- Mathematical and scientific analysis
- Logical deduction and planning
- Code architecture decisions
These models take more time to respond because they “think through” problems step-by-step, but they produce more accurate results on challenging tasks.
Provider Tiers
Each provider organizes their models into tiers:
OpenAI
| Tier | Examples | Use Case |
|---|---|---|
| Flagship | GPT-5, GPT-5.x | Most capable, best for complex tasks |
| Reasoning | o-series (o3, o4) | Deep analytical tasks |
| Efficient | mini, nano variants | Fast, cost-effective |
Anthropic
| Tier | Examples | Use Case |
|---|---|---|
| Opus | Claude Opus | Most intelligent, complex reasoning |
| Sonnet | Claude Sonnet | Balanced intelligence and speed |
| Haiku | Claude Haiku | Fast, efficient for simpler tasks |
Google
| Tier | Examples | Use Case |
|---|---|---|
| Pro | Gemini Pro | Most capable, complex tasks |
| Flash | Gemini Flash | Fast, real-time applications |
Others
| Provider | Flagship | Notes |
|---|---|---|
| Mistral | Mistral Large | Strong multilingual, coding |
| Meta | LLaMA | Open-source, efficient |
| DeepSeek | DeepSeek R1 | Strong reasoning, coding |
Choosing the Right Model
By Task Type
| Task | Recommended Model Type | Why |
|---|---|---|
| Quick questions, brainstorming | Fast/mini variants | Speed matters, good enough quality |
| Writing emails, documents | Standard flagship | Good balance of quality and speed |
| Complex analysis, research | Flagship or Reasoning variants | Need highest accuracy |
| Coding and debugging | Anthropic Sonnet or Reasoning models | Strong at structured tasks |
| Creative writing | Anthropic models | Known for natural, authentic tone |
| Long documents | Google Gemini | Excellent long-context handling |
| Math and science | Reasoning/Thinking variants | Step-by-step problem solving |
Quick Decision Guide
```
Is this a simple, quick task?
├─ Yes → Use a fast/mini/flash model
└─ No → Is deep reasoning required?
    ├─ Yes → Use a Reasoning/Thinking model
    └─ No → Use the standard flagship model
```
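The quick decision guide above can be expressed as a small helper function. The tier labels (`"fast"`, `"reasoning"`, `"flagship"`) are illustrative shorthand, not actual Langdock identifiers:

```python
def pick_model_tier(simple_task: bool, needs_deep_reasoning: bool) -> str:
    """Return the model tier suggested by the quick decision guide."""
    if simple_task:
        return "fast"        # mini / flash / nano variants
    if needs_deep_reasoning:
        return "reasoning"   # Thinking / Reasoning variants
    return "flagship"        # standard flagship model
```

For example, a quick brainstorming question maps to `"fast"`, while a multi-step analysis with no time pressure maps to `"reasoning"`.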
Our Recommendations
For Everyday Tasks
Use the current flagship model from OpenAI or Anthropic. These provide the best balance of capability and speed for general use. Look for models without size indicators (no “mini”, “fast”, etc.).
For Coding and Writing
Anthropic’s Sonnet models are consistently praised for natural-sounding text and strong coding capabilities. They have an authentic tone that works well for professional communication.
For Complex Reasoning
Use Thinking/Reasoning variants when you need maximum accuracy on analytical tasks. These take longer but significantly reduce errors on complex problems.
For Speed-Sensitive Tasks
Flash, mini, or nano variants deliver good results much faster. Perfect for real-time applications, iterating on ideas, or processing high volumes.
Image Generation Models
Image models also follow naming patterns:
| Provider | Models | Strengths |
|---|---|---|
| Black Forest Labs | Flux series | State-of-the-art quality, fast generation |
| Google | Imagen series | Diverse art styles, photo realism |
| OpenAI | DALL-E, GPT Image | Text-to-image, integrated with chat |
For image generation, “Fast” variants prioritize speed while standard versions prioritize quality. Choose based on whether you need quick iterations or final-quality output.
Staying Current
AI models evolve rapidly. To stay current:
- Check the model selector - Langdock always shows the latest available models
- Look for version numbers - Higher numbers = newer capabilities
- Try new models - When a new version appears, test it on your typical tasks
- Read release notes - Providers announce major improvements with new releases
Langdock continuously adds new models as they become available. The model selector in your chat always reflects the current options.