OpenAI Chat Completion
Creates a model response for the given chat conversation. This endpoint follows the OpenAI API specification; requests are sent to the Azure OpenAI endpoint.
In dedicated deployments, api.langdock.com maps to <Base URL>/api/public.
To use the API, you need an API key. Admins can create API keys in the settings.
All parameters from the OpenAI Chat Completion endpoint are supported according to the OpenAI specification, with the following exceptions:
- `model`: Currently only the `o3-mini`, `o1-preview`, `gpt-4o`, `gpt-4o-mini`, `gpt-4` and `gpt-35-turbo` models are supported. The list of available models might differ if you are using your own API keys in Langdock ("Bring-your-own-keys" / BYOK, see here for details). In this case, please reach out to your admin to understand which models are available in the API.
- `n`: Not supported.
- `service_tier`: Not supported.
- `parallel_tool_calls`: Not supported.
- `stream_options`: Not supported.
Rate limits
The rate limit for the Chat Completion endpoint is 500 RPM (requests per minute) and 60,000 TPM (tokens per minute). Rate limits are defined at the workspace level, not per API key. Each model has its own rate limit. If you exceed your rate limit, you will receive a `429 Too Many Requests` response.
Please note that rate limits are subject to change; refer to this documentation for the most up-to-date information. If you need a higher rate limit, please contact us at support@langdock.com.
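When a `429` response does occur, retrying with exponential backoff is a common client-side pattern. A minimal sketch using only the Python standard library (the URL, payload shape, and retry schedule below are illustrative, not part of the Langdock API):

```python
import json
import time
import urllib.error
import urllib.request


def post_with_retry(url, payload, api_key, max_retries=5):
    """POST a JSON payload, backing off exponentially on 429 responses."""
    body = json.dumps(payload).encode()
    for attempt in range(max_retries):
        req = urllib.request.Request(
            url,
            data=body,
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as e:
            if e.code == 429 and attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... then retry
                continue
            raise
```

Only `429` triggers a retry here; other HTTP errors are re-raised immediately so real failures surface right away.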
Using OpenAI-compatible libraries
Because the request and response formats match the OpenAI API, you can use popular libraries such as the OpenAI Python library or the Vercel AI SDK with the Langdock API.
Example using the OpenAI Python library
Example using the Vercel AI SDK in Node.js
Headers
Authorization
API key as a Bearer token, in the format "Bearer YOUR_API_KEY".
Path Parameters
region
The region of the API to use. Available options: `eu`, `us`.
Body
Response
Represents a chat completion response returned by the model, based on the provided input.