OpenAI Embeddings
Creates embeddings for text using OpenAI’s embedding models
In dedicated deployments, api.langdock.com maps to <Base URL>/api/public
Creates embeddings for text using OpenAI's embedding models. This endpoint follows the OpenAI API specification, and requests are sent to the Azure OpenAI endpoint.
To use the API, you need an API key. Admins can create API keys in the settings.
All parameters from the OpenAI Embeddings endpoint are supported according to the OpenAI specifications, with the following exceptions:
- `model`: Currently, only the `text-embedding-ada-002` model is supported.
- `encoding_format`: Supports both `float` and `base64` formats (see the decoding sketch below).
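When `encoding_format` is set to `base64`, each embedding is returned as a base64 string instead of a float array. Here is a minimal decoding sketch, assuming the packed little-endian float32 layout the OpenAI API uses for base64-encoded embeddings:

```python
import base64
import struct

def decode_embedding(b64: str) -> list[float]:
    """Decode a base64-encoded embedding into a list of floats.

    Assumes packed little-endian float32 values, as in the OpenAI API.
    """
    raw = base64.b64decode(b64)
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))
```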
Rate limits
The rate limit for the Embeddings endpoint is 500 RPM (requests per minute) and 60,000 TPM (tokens per minute). Rate limits are defined at the workspace level, not at the API key level. If you exceed your rate limit, you will receive a `429 Too Many Requests` response.
Please note that the rate limits are subject to change; refer to this documentation for the most up-to-date information. If you need a higher rate limit, please contact us at support@langdock.com.
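If you prefer to handle throttling yourself, a minimal retry sketch with exponential backoff could look like the following. The endpoint URL, including the `eu` region segment, is an assumption pieced together from this page's path parameters; verify it against your deployment:

```python
import time
import requests

# Assumed endpoint URL built from this page's region path parameter;
# adjust region (eu or us) and host to match your deployment.
url = "https://api.langdock.com/openai/eu/v1/embeddings"
headers = {"Authorization": "Bearer YOUR_API_KEY"}
payload = {"model": "text-embedding-ada-002", "input": "Hello, world"}

for attempt in range(5):
    response = requests.post(url, headers=headers, json=payload)
    if response.status_code != 429:
        break
    # Back off exponentially before retrying: 1s, 2s, 4s, ...
    time.sleep(2 ** attempt)

response.raise_for_status()
embedding = response.json()["data"][0]["embedding"]
print(f"Received embedding with {len(embedding)} dimensions")
```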
Using OpenAI-compatible libraries
As the request and response formats are the same as the OpenAI API's, you can use popular libraries such as the OpenAI Python library or the Vercel AI SDK with the Langdock API.
Example using the OpenAI Python library
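A minimal sketch using the official OpenAI Python library pointed at the Langdock API. The base URL below, including the region segment, is an assumption based on this page's path parameters; substitute your own values:

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",  # Langdock API key created by a workspace admin
    # Assumed base URL; substitute your region (eu or us) and deployment.
    base_url="https://api.langdock.com/openai/eu/v1",
)

response = client.embeddings.create(
    model="text-embedding-ada-002",
    input="The quick brown fox jumps over the lazy dog",
)

print(len(response.data[0].embedding))
```

Because the endpoint is a drop-in replacement, existing code built on the OpenAI Python library should work after swapping `base_url` and the API key.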
Example using the Vercel AI SDK in Node.js
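Similarly, a sketch using the Vercel AI SDK's OpenAI provider in Node.js, under the same base URL assumption:

```typescript
import { createOpenAI } from "@ai-sdk/openai";
import { embed } from "ai";

const langdock = createOpenAI({
  apiKey: process.env.LANGDOCK_API_KEY,
  // Assumed base URL; substitute your region (eu or us) and deployment.
  baseURL: "https://api.langdock.com/openai/eu/v1",
});

const { embedding } = await embed({
  model: langdock.embedding("text-embedding-ada-002"),
  value: "The quick brown fox jumps over the lazy dog",
});

console.log(embedding.length);
```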
Headers
Authorization: API key as a Bearer token, in the format `Bearer YOUR_API_KEY`.
Path Parameters
region: The region of the API to use. Available options: `eu`, `us`.
Body
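The request body follows the OpenAI Embeddings specification: `input` and `model` are required, and `encoding_format` is optional (see the supported values above).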
Response
The response is of type `object`.
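It follows the OpenAI Embeddings response format: an `object` field, a `data` array of embedding objects, the `model` name, and a `usage` object with token counts.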