Perplexity models can be configured in Langdock using the Perplexity API. These models are particularly useful for search-augmented generation, combining LLM capabilities with real-time web search.

Prerequisites

Before setting up Perplexity models, you need:
  1. A Perplexity account at perplexity.ai
  2. An API key from the Perplexity API settings
  3. Admin access to your Langdock workspace

Setup Steps

  1. Go to the model settings and click on Add Model
  2. Configure the Display Settings:
    • Provider: Perplexity
    • Model name: The model you want to add (e.g., Sonar Pro, Sonar)
    • Hosting provider: Perplexity
    • Region: US (Perplexity’s API is hosted in the US)
    • Image analysis: Disabled (Perplexity models do not support vision)
  3. Configure the Model Configuration:
    • SDK: Select Perplexity
    • Base URL: Leave empty to use the default (https://api.perplexity.ai) or specify a custom endpoint
    • Model ID: Use the official model identifier (see Model IDs below)
    • API key: Paste your Perplexity API key
    • Context Size: Set according to the model (check Perplexity’s documentation for current values)
  4. Click Save and test the model by sending a prompt before making it visible to all users
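
To sanity-check the same configuration outside Langdock, you can send a minimal prompt directly to the API. The sketch below assumes Perplexity's OpenAI-compatible chat completions endpoint (`/chat/completions` under the default Base URL from step 3); send the returned request with any HTTP client.

```python
import json

# Base URL used when the field is left empty in step 3.
DEFAULT_BASE_URL = "https://api.perplexity.ai"

def build_test_request(api_key: str, model_id: str, prompt: str = "Hello"):
    """Assemble the URL, headers, and JSON body for a minimal test prompt."""
    url = f"{DEFAULT_BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model_id,  # must match the Model ID exactly (case-sensitive)
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, json.dumps(payload)

# Example: the request you would send to verify the key before saving the model.
url, headers, body = build_test_request("pplx-...", "sonar-pro")
```

A 200 response with a completion confirms the API key, Model ID, and Base URL together before you make the model visible to users.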

Model IDs

Perplexity’s Sonar model family includes search-augmented and reasoning variants:
Model ID             Type
sonar-pro            Advanced search-augmented generation — detailed responses with citations
sonar                Fast search-augmented responses — good for general-purpose queries
sonar-reasoning-pro  Deep analysis with search — multi-step reasoning with citations
sonar-reasoning      Reasoning with search augmentation
Check Perplexity’s model documentation for the full list of available models, their specifications, and context sizes.
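
Because Model IDs are case-sensitive, a quick check against the IDs in the table above can catch the most common typo before saving. This is a sketch; the ID set below covers only the models listed here, not Perplexity's full catalog.

```python
# Model IDs from the table above; check Perplexity's docs for the full list.
KNOWN_MODEL_IDS = {"sonar-pro", "sonar", "sonar-reasoning-pro", "sonar-reasoning"}

def check_model_id(model_id: str) -> str:
    """Return the ID if valid; raise with a hint if casing is the only problem."""
    if model_id in KNOWN_MODEL_IDS:
        return model_id
    lowered = model_id.lower()
    if lowered in KNOWN_MODEL_IDS:
        raise ValueError(
            f"Model IDs are case-sensitive: use {lowered!r}, not {model_id!r}"
        )
    raise ValueError(f"Unknown model ID {model_id!r}")
```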

Configuration Notes

  • Perplexity models include built-in web search capabilities, so responses can draw on current information from the web
  • The API endpoint https://api.perplexity.ai is automatically used when no custom Base URL is provided
  • Sonar Pro models provide more detailed responses with better source citations
  • Reasoning variants are best for complex analytical tasks that benefit from step-by-step thinking
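
If you consume the API directly and want to surface sources alongside the answer, the helper below sketches one way to pull them out. It assumes search results appear in a top-level `citations` list on the response, which may not hold for every model or API version; an empty list typically means the model answered from its base knowledge.

```python
def extract_citations(response: dict) -> list:
    """Return citation URLs from a response, or [] if no search sources were attached."""
    return response.get("citations", [])

# Abridged response shape assumed for illustration:
sample = {
    "choices": [{"message": {"role": "assistant", "content": "..."}}],
    "citations": ["https://example.com/source"],
}
```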

Troubleshooting

Model not responding:
  • Verify your API key is valid and has not expired
  • Check that you have sufficient credits in your Perplexity account
  • Ensure the model ID matches exactly (case-sensitive)
Missing citations:
  • Perplexity models include citations automatically when web search is used
  • If citations are missing, the model may have answered from its base knowledge
Slow responses:
  • Perplexity models perform web searches, which adds latency
  • Sonar (non-Pro) variants are faster than Pro versions
  • For time-sensitive tasks without search needs, consider using a different model
If you run into any issues, contact support@langdock.com.