Codestral
Code generation using the Codestral model from Mistral.
Creates a code completion using the Codestral model from Mistral.
All parameters from the Mistral fill-in-the-middle Completion endpoint are supported according to the Mistral specifications.
Rate limits
The rate limit for the FIM Completion endpoint is 500 RPM (requests per minute) and 60,000 TPM (tokens per minute). Rate limits are defined at the workspace level, not at the API key level, and each model has its own rate limit. If you exceed your rate limit, you will receive a 429 Too Many Requests response.
Please note that rate limits are subject to change; refer to this documentation for the most up-to-date information. If you need a higher rate limit, please contact us at support@langdock.com.
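If you handle 429 responses in client code, a common pattern is to retry with exponential backoff. Below is a minimal sketch; the helper name `call_fim_endpoint` is hypothetical and stands in for whatever HTTP call your client makes:

```python
import time

def backoff_delays(retries=5, base=1.0, cap=30.0):
    """Delays (in seconds) for exponential backoff after 429 responses."""
    return [min(cap, base * (2 ** i)) for i in range(retries)]

# Sketch of a retry loop (call_fim_endpoint is a hypothetical helper that
# returns the HTTP status code and parsed body of a FIM request):
#
# for delay in backoff_delays():
#     status, body = call_fim_endpoint(payload)
#     if status != 429:
#         break          # success or a non-rate-limit error
#     time.sleep(delay)  # back off before retrying
```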
Using the Continue AI Code Assistant
Using the Codestral model together with chat completion models from the Langdock API, you can run the open-source AI code assistant Continue (continue.dev) entirely via the Langdock API.
Continue is available as a VS Code extension and as a JetBrains extension. To customize the models Continue uses, edit the configuration file at ~/.continue/config.json (macOS / Linux) or %USERPROFILE%\.continue\config.json (Windows).
Below is an example setup for using Continue with Codestral for autocomplete and the Claude 3.5 Sonnet and GPT-4o models for chat and edits, all served from the Langdock API.
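The following config.json is a sketch of such a setup. The base URLs and model identifiers shown here are assumptions for illustration; verify the exact values against the Langdock API reference before use:

```json
{
  "models": [
    {
      "title": "Claude 3.5 Sonnet",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20240620",
      "apiBase": "https://api.langdock.com/anthropic/eu/v1",
      "apiKey": "YOUR_LANGDOCK_API_KEY"
    },
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiBase": "https://api.langdock.com/openai/eu/v1",
      "apiKey": "YOUR_LANGDOCK_API_KEY"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Codestral",
    "provider": "mistral",
    "model": "codestral-latest",
    "apiBase": "https://api.langdock.com/mistral/eu/v1",
    "apiKey": "YOUR_LANGDOCK_API_KEY"
  }
}
```

The chat models appear under "models" and are selectable in Continue's chat and edit views, while "tabAutocompleteModel" is used only for inline autocomplete.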
Headers
API key passed as a Bearer token, in the format "Bearer YOUR_API_KEY".
Path Parameters
The region of the API to use.
eu
Body
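As a sketch, a FIM request following the Mistral fill-in-the-middle parameters might be assembled as follows. The URL layout and the model identifier are assumptions for illustration; check this endpoint reference for the exact values:

```python
import json

region = "eu"  # path parameter: the region of the API to use
# Assumed URL layout, shown for illustration only.
url = f"https://api.langdock.com/mistral/{region}/v1/fim/completions"

headers = {
    "Authorization": "Bearer YOUR_API_KEY",  # API key as Bearer token
    "Content-Type": "application/json",
}

# Request body using parameters from the Mistral FIM Completion endpoint.
body = {
    "model": "codestral-latest",      # assumed model identifier
    "prompt": "def add(a, b):\n",     # code before the cursor
    "suffix": "\nprint(add(1, 2))",   # code after the cursor
    "max_tokens": 64,
    "temperature": 0.2,
}
payload = json.dumps(body)
```

The payload can then be sent with any HTTP client (e.g. requests.post(url, headers=headers, data=payload)).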
Response
Successful Response
The response is of type object.
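The full schema is not reproduced here. As a hedged sketch, assuming the response follows Mistral's completion shape (a "choices" array whose entries carry the generated message), the completed code could be extracted like this:

```python
# Assumed response shape, modeled on Mistral's completion responses;
# the field layout is an assumption, not a guaranteed schema.
response = {
    "id": "cmpl-123",
    "object": "chat.completion",
    "model": "codestral-latest",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "    return a + b"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 10, "completion_tokens": 5, "total_tokens": 15},
}

# Pull the generated completion text out of the first choice.
completion_text = response["choices"][0]["message"]["content"]
```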