Selecting a Model

  • Whenever you start a new chat, you can select which model to use from the dropdown in the top left
  • You can switch models mid-conversation - for example, start with a fast model for brainstorming, then switch to a more powerful one for the final output
  • Set your personal default model in account settings

Understanding Model Naming Conventions

AI providers follow consistent naming patterns that help you quickly identify a model’s capabilities. Understanding these patterns lets you choose the right model without memorizing specific versions.

Version Numbers = Capability Level

Higher version numbers generally indicate newer, more capable models. When a provider releases a new generation, they increment the major version number.
| Pattern | What it means |
| --- | --- |
| GPT-5 vs GPT-4 | GPT-5 is the newer generation |
| Claude 4 vs Claude 3 | Claude 4 is the newer generation |
| Gemini 2.5 vs Gemini 2.0 | Gemini 2.5 is newer within the same generation |
When in doubt, choose the model with the higher version number - it typically has better reasoning, fewer errors, and more capabilities.
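The "higher number wins" rule can be sketched as a tiny comparison helper. This is an illustrative snippet, not part of any provider's API: it simply pulls the first number (e.g. 5, or 2.5) out of each model name and compares them.

```python
import re

def newer_model(a: str, b: str) -> str:
    """Return whichever model name carries the higher version number.

    Hypothetical helper for illustration: extracts the first number in
    each name (major or major.minor) and compares the two as floats.
    """
    def version(name: str) -> float:
        match = re.search(r"\d+(?:\.\d+)?", name)
        return float(match.group()) if match else 0.0

    return a if version(a) >= version(b) else b

print(newer_model("GPT-5", "GPT-4"))            # GPT-5
print(newer_model("Gemini 2.0", "Gemini 2.5"))  # Gemini 2.5
```

Real model names vary more than this (dates, suffixes like "Turbo"), so treat this as a mental model rather than a parser you would ship.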

Size Indicators = Speed vs Intelligence Trade-off

Providers offer multiple sizes within each model family. Models without size indicators are the most intelligent but may be slower. Models with size indicators trade some capability for speed and cost efficiency.
| Indicator | Intelligence | Speed | Best for |
| --- | --- | --- | --- |
| No indicator (e.g., “GPT-5”, “Claude Sonnet”) | Highest | Moderate | Complex tasks, important outputs |
| mini / nano | Medium-High | Fast | Everyday tasks, quick iterations |
| flash / fast | Medium | Very Fast | Real-time applications, high volume |
| haiku (Anthropic) | Good | Very Fast | Simple tasks, cost-sensitive use cases |
Pro tip: Start with a faster model for drafts and exploration, then switch to the full model for your final output. This saves time while still getting high-quality results when it matters.
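The table above boils down to a simple lookup: spot the size indicator in the name, read off the trade-off. The sketch below mirrors the table; the indicator strings and category labels are illustrative, not an official naming standard.

```python
# Illustrative mapping of size indicator -> speed/intelligence trade-off,
# taken directly from the table above (labels are descriptive, not an API).
SIZE_TRADEOFFS = {
    "none":  {"intelligence": "highest",     "speed": "moderate"},
    "mini":  {"intelligence": "medium-high", "speed": "fast"},
    "nano":  {"intelligence": "medium-high", "speed": "fast"},
    "flash": {"intelligence": "medium",      "speed": "very fast"},
    "fast":  {"intelligence": "medium",      "speed": "very fast"},
    "haiku": {"intelligence": "good",        "speed": "very fast"},
}

def size_indicator(model_name: str) -> str:
    """Return the size indicator embedded in a model name, or 'none'."""
    lowered = model_name.lower()
    for indicator in ("mini", "nano", "flash", "fast", "haiku"):
        if indicator in lowered:
            return indicator
    return "none"

print(size_indicator("GPT-5 mini"))     # mini
print(size_indicator("Claude Sonnet")) # none
```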

Reasoning/Thinking Variants = Deep Analysis

Some models have “Reasoning” or “Thinking” variants (e.g., “GPT-5 Thinking”, “Claude Opus Reasoning”). These are specifically optimized for:
  • Complex multi-step problems
  • Mathematical and scientific analysis
  • Logical deduction and planning
  • Code architecture decisions
These models take more time to respond because they “think through” problems step-by-step, but they produce more accurate results on challenging tasks.

Provider Tiers

Each provider organizes their models into tiers:
| Tier | Examples | Use Case |
| --- | --- | --- |
| Flagship | GPT-5, GPT-5.x | Most capable, best for complex tasks |
| Reasoning | o-series (o3, o4) | Deep analytical tasks |
| Efficient | mini, nano variants | Fast, cost-effective |

Choosing the Right Model

By Task Type

| Task | Recommended Model Type | Why |
| --- | --- | --- |
| Quick questions, brainstorming | Fast/mini variants | Speed matters, good enough quality |
| Writing emails, documents | Standard flagship | Good balance of quality and speed |
| Complex analysis, research | Flagship or Reasoning variants | Need highest accuracy |
| Coding and debugging | Anthropic Sonnet or Reasoning models | Strong at structured tasks |
| Creative writing | Anthropic models | Known for natural, authentic tone |
| Long documents | Google Gemini | Excellent long-context handling |
| Math and science | Reasoning/Thinking variants | Step-by-step problem solving |

Quick Decision Guide

Is this a simple, quick task?
├─ Yes → Use a fast/mini/flash model
└─ No → Is deep reasoning required?
         ├─ Yes → Use a Reasoning/Thinking model
         └─ No → Use the standard flagship model
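The decision tree above is just two yes/no questions, which can be sketched as a small function. The category names returned here are illustrative shorthand for the tiers described earlier, not product identifiers.

```python
def pick_model(simple_task: bool, needs_deep_reasoning: bool) -> str:
    """Sketch of the quick decision guide above (labels are illustrative)."""
    if simple_task:
        return "fast/mini/flash model"
    if needs_deep_reasoning:
        return "reasoning/thinking model"
    return "standard flagship model"

print(pick_model(simple_task=True, needs_deep_reasoning=False))
# fast/mini/flash model
print(pick_model(simple_task=False, needs_deep_reasoning=True))
# reasoning/thinking model
```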

Our Recommendations

For Everyday Tasks

Use the current flagship model from OpenAI or Anthropic. These provide the best balance of capability and speed for general use. Look for models without size indicators (no “mini”, “fast”, etc.).

For Coding and Writing

Anthropic’s Sonnet models are consistently praised for natural-sounding text and strong coding capabilities. They have an authentic tone that works well for professional communication.

For Complex Reasoning

Use Thinking/Reasoning variants when you need maximum accuracy on analytical tasks. These take longer but significantly reduce errors on complex problems.

For Speed-Sensitive Tasks

Flash, mini, or nano variants deliver good results much faster. Perfect for real-time applications, iterating on ideas, or processing high volumes.

Image Generation Models

Image models also follow naming patterns:
| Provider | Models | Strengths |
| --- | --- | --- |
| Black Forest Labs | Flux series | State-of-the-art quality, fast generation |
| Google | Imagen series | Diverse art styles, photo realism |
| OpenAI | DALL-E, GPT Image | Text-to-image, integrated with chat |
For image generation, “Fast” variants prioritize speed while standard versions prioritize quality. Choose based on whether you need quick iterations or final-quality output.

Staying Current

AI models evolve rapidly. To stay current:
  1. Check the model selector - Langdock always shows the latest available models
  2. Look for version numbers - Higher numbers = newer capabilities
  3. Try new models - When a new version appears, test it on your typical tasks
  4. Read release notes - Providers announce major improvements with new releases
Langdock continuously adds new models as they become available. The model selector in your chat always reflects the current options.