POST /assistant/v1/chat/completions
curl --request POST \
  --url https://api.langdock.com/assistant/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "assistantId": "asst_123",
  "messages": [
    {
      "role": "user",
      "content": "Hello, how can you help me?"
    }
  ],
  "stream": true
}
'
{
  "result": [
    {
      "id": "<string>",
      "role": "tool",
      "content": [
        {
          "type": "<string>",
          "toolCallId": "<string>",
          "toolName": "<string>",
          "result": {},
          "args": {},
          "text": "<string>"
        }
      ]
    }
  ],
  "output": {}
}
The Assistants API will be deprecated in a future release. For new projects, we recommend the Agents API, which provides native Vercel AI SDK compatibility and removes the need for custom transformations. See the migration guide to learn about the differences.
Creates a model response for a given assistant ID, or for an assistant configuration passed directly in the request.
To share an assistant with an API key, follow this guide.

Request Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| assistantId | string | One of assistantId/assistant required | ID of an existing assistant to use |
| assistant | object | One of assistantId/assistant required | Configuration for a new assistant |
| messages | array | Yes | Array of message objects with role and content |
| stream | boolean | No | Enable streaming responses (default: false) |
| output | object | No | Structured output format specification |

Message Format

Each message in the messages array should contain:
  • role (required) - One of: “user”, “assistant”, or “tool”
  • content (required) - The message content as a string
  • attachmentIds (optional) - Array of UUID strings identifying attachments for this message
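The message shape above can be sketched as follows; the UUID below is a placeholder, not a real attachment ID:

```javascript
// A minimal messages array covering the fields listed above.
const messages = [
  {
    role: "user",
    content: "Summarize the attached file.",
    // Optional: IDs returned by the Upload Attachment API (placeholder value here)
    attachmentIds: ["550e8400-e29b-41d4-a716-446655440000"],
  },
  // Prior assistant turns can be included to continue a conversation
  { role: "assistant", content: "Sure, which sections matter most?" },
];
```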

Agent Configuration

When creating a temporary assistant, you can specify:
  • name (required) - Name of the assistant (max 64 chars)
  • instructions (required) - System instructions (max 16384 chars)
  • description - Optional description (max 256 chars)
  • temperature - Sampling temperature between 0 and 1
  • model - Model ID to use (see Available Models for options)
  • capabilities - Enable features like web search, data analysis, image generation
  • actions - Custom API integrations
  • vectorDb - Vector database connections
  • knowledgeFolderIds - IDs of knowledge folders to use
  • attachmentIds - Array of UUID strings identifying attachments to use
You can retrieve a list of available models using the Models API. This is useful when you want to see which models you can use in your assistant configuration.

Using Tools via API

When an assistant has tools configured (called “Actions” in the Langdock UI), it will automatically use them to respond to API requests when appropriate. The connection must be set to “preselected connection” (shared with other users) for tool authentication to work.
(Screenshot: the preselected connection setting in the assistant configuration.)
Tools with “Require human confirmation” enabled do not work via API—they require manual approval in the Langdock UI. To use a tool via API, disable this setting in the assistant configuration.

Structured Output

You can specify a structured output format using the optional output parameter:
| Field | Type | Description |
| --- | --- | --- |
| type | "object" \| "array" \| "enum" | The type of structured output |
| schema | object | JSON Schema definition for the output (for object/array types) |
| enum | string[] | Array of allowed values (for enum type) |
The output parameter behavior depends on the specified type:
  • type: "object" with no schema: Forces the response to be a single JSON object (no specific structure)
  • type: "object" with schema: Forces the response to match the provided JSON Schema
  • type: "array" with schema: Forces the response to be an array of objects matching the provided schema
  • type: "enum": Forces the response to be one of the values specified in the enum array
You can use tools like easy-json-schema to generate JSON Schemas from example JSON objects.

Streaming Responses

When stream is set to true, the API will return a stream of server-sent events (SSE) instead of waiting for the complete response. This allows you to display responses to users progressively as they are generated.

Stream Format

Each event in the stream follows the SSE format with JSON data:
data: {"type":"message","content":"Hello"}
data: {"type":"message","content":" world"}
data: {"type":"done"}

Handling Streams in JavaScript

const response = await fetch('https://api.langdock.com/assistant/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    assistantId: 'asst_123',
    messages: [{ role: 'user', content: 'Hello' }],
    stream: true
  }),
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // { stream: true } keeps multi-byte characters intact across chunk boundaries
  buffer += decoder.decode(value, { stream: true });

  // An SSE event may be split across chunks, so only process complete lines
  // and keep the trailing partial line in the buffer
  const lines = buffer.split('\n');
  buffer = lines.pop();

  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = JSON.parse(line.slice(6));
      if (data.type === 'message') {
        process.stdout.write(data.content);
      }
    }
  }
}

Obtaining Attachment IDs

To use attachments in your assistant conversations, you first need to upload the files using the Upload Attachment API. This will return an attachmentId for each file, which you can then include in the attachmentIds array in your assistant or message configuration.
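A minimal upload sketch is shown below. The endpoint path and response field name are assumptions for illustration; check the Upload Attachment API reference for the actual values.

```javascript
// Sketch only: the URL below is a hypothetical placeholder for the
// Upload Attachment endpoint, and `attachmentId` is an assumed response field.
async function uploadAttachment(apiKey, file) {
  const form = new FormData();
  form.append("file", file);

  const res = await fetch("https://api.langdock.com/attachment/v1/upload", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: form,
  });

  const data = await res.json();
  // Pass this ID in the attachmentIds array of a message or assistant config
  return data.attachmentId;
}
```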

Examples

Using an Existing Agent

const axios = require("axios");

async function chatWithAssistant() {
  const response = await axios.post(
    "https://api.langdock.com/assistant/v1/chat/completions",
    {
      assistantId: "asst_123",
      messages: [
        {
          role: "user",
          content: "Can you analyze this document for me?",
          attachmentIds: ["550e8400-e29b-41d4-a716-446655440000"], // Obtain attachmentIds from upload attachment endpoint
        },
      ],
      stream: true, // Enable streaming responses
    },
    {
      headers: {
        Authorization: "Bearer YOUR_API_KEY",
      },
    }
  );

  console.log(response.data.result);
}

Using a Temporary Agent Configuration

const axios = require("axios");

async function chatWithNewAssistant() {
  const response = await axios.post(
    "https://api.langdock.com/assistant/v1/chat/completions",
    {
      assistant: {
        name: "Document Analyzer",
        instructions:
          "You are a helpful assistant who analyzes documents and answers questions about them",
        temperature: 0.7,
        model: "gpt-4",
        capabilities: {
          webSearch: true,
          dataAnalyst: true,
        },
        attachmentIds: ["550e8400-e29b-41d4-a716-446655440000"], // Obtain attachmentIds from upload attachment endpoint
      },
      messages: [
        {
          role: "user",
          content: "What are the key points in the document?",
        },
      ],
    },
    {
      headers: {
        Authorization: "Bearer YOUR_API_KEY",
      },
    }
  );

  console.log(response.data.result);
}

Using Structured Output with Schema

const axios = require("axios");

async function getStructuredWeather() {
  const response = await axios.post(
    "https://api.langdock.com/assistant/v1/chat/completions",
    {
      assistant: {
        name: "Weather Agent",
        instructions: "You are a helpful weather assistant",
        model: "gpt-5.1",
        capabilities: {
          webSearch: true,
        },
      },
      messages: [
        {
          role: "user",
          content: "What's the weather in paris, berlin and london today?",
        },
      ],
      output: {
        type: "array",
        schema: {
          type: "object",
          properties: {
            weather: {
              type: "object",
              properties: {
                city: {
                  type: "string",
                },
                tempInCelsius: {
                  type: "number",
                },
                tempInFahrenheit: {
                  type: "number",
                },
              },
              required: ["city", "tempInCelsius", "tempInFahrenheit"],
            },
          },
        },
      },
    },
    {
      headers: {
        Authorization: "Bearer YOUR_API_KEY",
      },
    }
  );

  // Access the structured data directly from output
  console.log(response.data.output);
  // Output:
  // [
  //   { "weather": { "city": "Paris", "tempInCelsius": 1, "tempInFahrenheit": 33 } },
  //   { "weather": { "city": "Berlin", "tempInCelsius": 1, "tempInFahrenheit": 35 } },
  //   { "weather": { "city": "London", "tempInCelsius": 7, "tempInFahrenheit": 45 } }
  // ]
}

Using Structured Output with Object

const axios = require("axios");

async function extractContactInfo() {
  const response = await axios.post(
    "https://api.langdock.com/assistant/v1/chat/completions",
    {
      assistant: {
        name: "Contact Extractor",
        instructions: "You extract contact information from text",
      },
      messages: [
        {
          role: "user",
          content:
            "Extract the contact info: John Smith is our new sales lead. You can reach him at john.smith@example.com or call +1-555-123-4567.",
        },
      ],
      output: {
        type: "object",
        schema: {
          type: "object",
          properties: {
            name: {
              type: "string",
            },
            email: {
              type: "string",
            },
            phone: {
              type: "string",
            },
            role: {
              type: "string",
            },
          },
          required: ["name", "email"],
        },
      },
    },
    {
      headers: {
        Authorization: "Bearer YOUR_API_KEY",
      },
    }
  );

  // Access the structured data directly from output
  console.log(response.data.output);
  // Output:
  // {
  //   "name": "John Smith",
  //   "email": "john.smith@example.com",
  //   "phone": "+1-555-123-4567",
  //   "role": "sales lead"
  // }
}

Using Structured Output with Enum

const axios = require("axios");

async function getSentimentAnalysis() {
  const response = await axios.post(
    "https://api.langdock.com/assistant/v1/chat/completions",
    {
      assistant: {
        name: "Sentiment Analyzer",
        instructions: "You analyze the sentiment of text",
      },
      messages: [
        {
          role: "user",
          content:
            "How would you rate this review: 'This product exceeded my expectations!'",
        },
      ],
      output: {
        type: "enum",
        enum: ["positive", "neutral", "negative"],
      },
    },
    {
      headers: {
        Authorization: "Bearer YOUR_API_KEY",
      },
    }
  );

  // Access the enum result directly from output
  console.log(response.data.output);
  // Output: "positive"
}

Rate limits

The rate limit for the Agent Completion endpoint is 500 RPM (requests per minute) and 60,000 TPM (tokens per minute). Rate limits are defined at the workspace level, not per API key, and each model has its own rate limit. If you exceed your rate limit, you will receive a 429 Too Many Requests response. Rate limits are subject to change; refer to this documentation for the most up-to-date information. If you need a higher rate limit, please contact us at support@langdock.com.
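One way to handle 429 responses is a small retry wrapper with exponential backoff. The delays and attempt count below are illustrative choices, not Langdock recommendations:

```javascript
// Retries a request function when it fails with HTTP 429, backing off
// exponentially (1s, 2s, 4s, ...). Any other error is rethrown immediately.
async function withRetry(fn, maxAttempts = 3) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const status = error.response && error.response.status;
      if (status !== 429 || attempt >= maxAttempts) throw error;
      const delayMs = 1000 * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Wrap any axios call from the examples above, e.g. `await withRetry(() => axios.post(url, body, config))`.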

Response Format

The API returns an object containing:
{
  // Standard message results - always present
  result: Array<{
    id: string;
    role: "tool" | "assistant";
    content: Array<{
      type: string;
      toolCallId?: string;
      toolName?: string;
      result?: object;
      args?: object;
      text?: string;
    }>;
  }>;

  // Structured output - present when the output parameter was specified
  output?: object | array | string;
}

Standard Result

The result array contains the message exchange between user and assistant, including any tool calls that were made. This is always present in the response.

Structured Output

When the request includes an output parameter, the response will automatically include an output field containing the formatted structured data. The type of this field depends on the requested output format:
  • If output.type was “object”: Returns a JSON object (with schema validation if schema was provided)
  • If output.type was “array”: Returns an array of objects matching the provided schema
  • If output.type was “enum”: Returns a string matching one of the provided enum values
For example, when requesting weather data with structured output:
// Request
{
  "output": {
    "type": "array",
    "schema": {
      "type": "object",
      "properties": {
        "weather": {
          "type": "object",
          "properties": {
            "city": { "type": "string" },
            "tempInCelsius": { "type": "number" },
            "tempInFahrenheit": { "type": "number" }
          },
          "required": ["city", "tempInCelsius", "tempInFahrenheit"]
        }
      }
    }
  }
}

// Response
{
  "result": [
    // Full conversation including tool calls (e.g., web searches)
    { "role": "assistant", "content": [...], "id": "..." },
    { "role": "tool", "content": [...], "id": "..." },
    { "role": "assistant", "content": "...", "id": "..." }
  ],
  "output": [
    { "weather": { "city": "Paris", "tempInCelsius": 1, "tempInFahrenheit": 33 } },
    { "weather": { "city": "Berlin", "tempInCelsius": 1, "tempInFahrenheit": 35 } },
    { "weather": { "city": "London", "tempInCelsius": 7, "tempInFahrenheit": 45 } }
  ]
}
The output field is automatically populated with the formatted results based on the assistant’s response and your schema definition. You can use this directly in your application without parsing the full conversation in result.

Error Handling

const axios = require("axios");

try {
  const response = await axios.post('https://api.langdock.com/assistant/v1/chat/completions', ...);
} catch (error) {
  if (error.response) {
    switch (error.response.status) {
      case 400:
        console.error('Invalid parameters:', error.response.data.message);
        break;
      case 429:
        console.error('Rate limit exceeded');
        break;
      case 500:
        console.error('Server error');
        break;
      default:
        console.error('Unexpected error:', error.response.status);
    }
  }
}

Migrating to Agents API

The new Agents API offers improved compatibility with modern AI SDKs, including native support for the Vercel AI SDK. The main difference is the chat completions endpoint format; see the equivalent endpoint in the Agents API.
Langdock intentionally blocks browser-origin requests to protect your API key and ensure your applications remain secure. For more information, please see our guide on API Key Best Practices.

Authorizations

Authorization
string · header · required
API key as Bearer token. Format "Bearer YOUR_API_KEY"

Body (application/json)

assistantId
string · required
ID of an existing agent to use

messages
object[] · required

stream
boolean · default: false
Enable or disable streaming responses. When true, returns server-sent events. When false, returns a complete JSON response.
Example: true

output
object
Specification for the structured output format. When type is object/array and no schema is provided, the response will be JSON but may have any structure. When the type is enum, you must provide an enum parameter with an array of strings as options.

maxSteps
integer · default: 10 · required range: 1 <= x <= 20
Maximum number of steps the agent can take during the conversation

Response

Successful chat completion

result
object[] · required

output
Present when the output parameter was specified in the request