POST /mistral/{region}/v1/fim/completions
curl --request POST \
  --url https://api.langdock.com/mistral/{region}/v1/fim/completions \
  --header 'Authorization: Bearer <YOUR_LANGDOCK_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
  "model": "codestral-2405",
  "prompt": "function removeSpecialCharactersWithRegex(str: string) {",
  "max_tokens": 64
}'
{
  "data": "asd",
  "id": "245c52bc936f53ba90327800c73d1c3e",
  "object": "chat.completion",
  "model": "codestral",
  "usage": {
    "prompt_tokens": 16,
    "completion_tokens": 102,
    "total_tokens": 118
  },
  "created": 1732902806,
  "choices": [
    {
      "index": 0,
      "message": {
        "content": "\n  // Use a regular expression to match any non-alphanumeric character and replace it with an empty string\n  return str.replace(/[^a-zA-Z0-9]/g, '');\n}\n\n// Test the function\nconst inputString = \"Hello, World! 123\";\nconst outputString = removeSpecialCharactersWithRegex(inputString);\nconsole.log(outputString); // Output: \"HelloWorld123\"",
        "prefix": false,
        "role": "assistant"
      },
      "finish_reason": "stop"
    }
  ]
}

Creates a code completion using the Codestral model from Mistral.

All parameters from the Mistral fill-in-the-middle completion endpoint are supported as described in the Mistral specification.
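
For reference, here is a minimal TypeScript sketch of the same request using fetch, including reading the generated code from the response. The LANGDOCK_API_KEY environment variable is an illustrative name, not part of the Langdock specification.

// Minimal sketch (Node 18+ or browsers): send a FIM request and read the completion.
// LANGDOCK_API_KEY is an assumed variable name for your API key.
const apiKey = process.env.LANGDOCK_API_KEY;

const response = await fetch(
  "https://api.langdock.com/mistral/eu/v1/fim/completions",
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "codestral-2405",
      prompt: "function removeSpecialCharactersWithRegex(str: string) {",
      max_tokens: 64,
    }),
  },
);

const completion = await response.json();
// The generated continuation is in choices[0].message.content,
// as in the example response above.
console.log(completion.choices[0].message.content);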

Rate limits

The rate limit for the FIM Completion endpoint is 500 RPM (requests per minute) and 60,000 TPM (tokens per minute). Rate limits are defined at the workspace level, not at the API-key level. Each model has its own rate limit. If you exceed your rate limit, you will receive a 429 Too Many Requests response.

Please note that the rate limits are subject to change; refer to this documentation for the most up-to-date information. If you need a higher rate limit, please contact us at support@langdock.com.
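
If you hit the limit, a simple client-side backoff is usually sufficient. Below is an illustrative TypeScript sketch that retries a request on a 429 response with exponential backoff; the retry count and delay values are arbitrary choices, not values mandated by the API.

// Illustrative helper: retry a request on 429 Too Many Requests with
// exponential backoff (1s, 2s, 4s, ...). Attempt count and delays are
// arbitrary, not part of the Langdock API.
async function fetchWithRetry(
  url: string,
  init: RequestInit,
  maxAttempts = 3,
): Promise<Response> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await fetch(url, init);
    if (response.status !== 429 || attempt === maxAttempts) {
      return response;
    }
    const delayMs = 1000 * 2 ** (attempt - 1);
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("unreachable");
}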

Using the Continue AI Code Assistant

Combined with the chat completion models from the Langdock API, the Codestral model makes it possible to run the open-source AI code assistant Continue (continue.dev) entirely through the Langdock API.

Continue is available as a VS Code extension and as a JetBrains extension. To customize the models used by Continue, edit the configuration file at ~/.continue/config.json (macOS/Linux) or %USERPROFILE%\.continue\config.json (Windows).

Below is an example setup for using Continue with the Codestral model for autocomplete and the Claude 3.5 Sonnet and GPT-4o models for chats and edits, all served from the Langdock API.

{
  "models": [
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "<YOUR_LANGDOCK_API_KEY>",
      "apiBase": "https://api.langdock.com/openai/eu/v1"
    },
    {
      "title": "Claude 3.5 Sonnet",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20240620",
      "apiKey": "<YOUR_LANGDOCK_API_KEY>",
      "apiBase": "https://api.langdock.com/anthropic/eu/v1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Codestral",
    "provider": "mistral",
    "model": "codestral-2405",
    "apiKey": "<YOUR_LANGDOCK_API_KEY>",
    "apiBase": "https://api.langdock.com/mistral/eu/v1"
  }
  /* ... other configuration ... */
}

Headers

Authorization (string, required)
API key as Bearer token. Format: "Bearer YOUR_API_KEY"

Path Parameters

region (enum<string>, required)
The region of the API to use. Available options: eu

Body

application/json

Response

200 (application/json): Successful Response. The response is of type object.