In dedicated deployments, api.langdock.com maps to <Base URL>/api/public
To use the API, you need an API key. Admins can create API keys in the settings.
model: Currently, only the text-embedding-ada-002 model is supported.
encoding_format: Supports both float and base64 formats.
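When encoding_format is set to base64, each embedding is returned as a base64-encoded string rather than a list of floats. A minimal decoding sketch, assuming the OpenAI convention of packed little-endian float32 values (the helper name is illustrative):

```python
import base64
import struct

def decode_base64_embedding(b64_embedding: str) -> list[float]:
    # Assumption: base64 embeddings follow the OpenAI convention of
    # packed little-endian float32 values.
    raw = base64.b64decode(b64_embedding)
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))
```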
Rate limits
The rate limit for the Embeddings endpoint is 500 RPM (requests per minute) and 60,000 TPM (tokens per minute). Rate limits are defined at the workspace level, not at the API key level. If you exceed your rate limit, you will receive a 429 Too Many Requests response.
Please note that rate limits are subject to change; refer to this documentation for the most up-to-date information.
In case you need a higher rate limit, please contact us at [email protected].
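If you prefer to handle 429 responses automatically, the sketch below retries with exponential backoff. Whether the API sends a Retry-After header is an assumption, so the code falls back to exponential delays when it is absent.

```python
import time
import requests

def post_with_retry(url: str, headers: dict, body: dict, max_retries: int = 5) -> requests.Response:
    """POST and retry on 429 Too Many Requests with exponential backoff."""
    response = requests.post(url, headers=headers, json=body, timeout=30)
    for attempt in range(max_retries):
        if response.status_code != 429:
            break
        # Honor a Retry-After header if one is sent (assumption),
        # otherwise back off exponentially: 1s, 2s, 4s, ...
        time.sleep(float(response.headers.get("Retry-After", 2 ** attempt)))
        response = requests.post(url, headers=headers, json=body, timeout=30)
    return response
```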
Using OpenAI-compatible libraries
Because the request and response formats are the same as the OpenAI API, you can use popular libraries like the OpenAI Python library or the Vercel AI SDK to call the Langdock API.
Example using the OpenAI Python library
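A minimal sketch using the OpenAI Python library. The base URL below (including the eu region segment) is an assumption for illustration; use the endpoint of your workspace, or <Base URL>/api/public in dedicated deployments.

```python
from openai import OpenAI

# Assumed base URL with the region path parameter set to "eu"; adjust for
# your workspace, or use <Base URL>/api/public in dedicated deployments.
client = OpenAI(
    api_key="<YOUR_LANGDOCK_API_KEY>",
    base_url="https://api.langdock.com/openai/eu/v1",
)

response = client.embeddings.create(
    model="text-embedding-ada-002",
    input="The quick brown fox jumps over the lazy dog",
)
print(response.data[0].embedding[:5])
```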
Example using the Vercel AI SDK in Node.js
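A minimal sketch using the Vercel AI SDK (the ai and @ai-sdk/openai packages). The base URL is again an assumption; adjust it to your workspace or dedicated deployment.

```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { embed } from 'ai';

// Assumed base URL with the region path parameter set to "eu"; adjust for
// your workspace, or use <Base URL>/api/public in dedicated deployments.
const langdock = createOpenAI({
  baseURL: 'https://api.langdock.com/openai/eu/v1',
  apiKey: process.env.LANGDOCK_API_KEY,
});

async function main() {
  const { embedding, usage } = await embed({
    model: langdock.embedding('text-embedding-ada-002'),
    value: 'The quick brown fox jumps over the lazy dog',
  });
  console.log(embedding.length, usage);
}

main();
```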
Headers
Path Parameters
Available options: eu, us
Body
application/json
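For reference, a hedged raw-HTTP sketch of the full request shape: Bearer authentication in the headers (assumed), the region path parameter set to eu, and a JSON body following the OpenAI embeddings format described above. Adjust the URL for your workspace or dedicated deployment.

```python
import requests

# Assumptions: the region path parameter is "eu" and the API key is passed
# as a Bearer token; in dedicated deployments, use <Base URL>/api/public.
url = "https://api.langdock.com/openai/eu/v1/embeddings"
headers = {
    "Authorization": "Bearer <YOUR_LANGDOCK_API_KEY>",
    "Content-Type": "application/json",
}
body = {
    "model": "text-embedding-ada-002",
    "input": "The quick brown fox jumps over the lazy dog",
    "encoding_format": "float",
}

response = requests.post(url, headers=headers, json=body, timeout=30)
response.raise_for_status()
print(len(response.json()["data"][0]["embedding"]))
```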