POST /completion
curl --request POST \
  --url https://engine.langdock.com/api/completion \
  --header 'Content-Type: application/json' \
  --data '{
  "query": "<string>",
  "provider": "openai",
  "model": "gpt-3.5-turbo",
  "region": "eu",
  "temperature": 123
}'
{
  "message": "<string>",
  "model": {
    "provider": "<string>",
    "model": "<string>",
    "region": "<string>"
  },
  "usage": {
    "promptTokens": 123,
    "completionTokens": 123
  }
}

The completion endpoint allows you to complete a prompt with the selected model. To configure the model, use the `provider` and `model` parameters. More models will be added over time.

To use the API, you need an API key. To request access, please contact us at support@langdock.com.
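The curl example above can also be issued from Python with the standard library. This is a minimal sketch: the `Authorization: Bearer` header is an assumption, since the example above does not show how the API key is passed.

```python
import json
import urllib.request

API_URL = "https://engine.langdock.com/api/completion"

def build_completion_request(api_key, query, provider="openai",
                             model="gpt-3.5-turbo", region="eu",
                             temperature=0.7):
    """Build the POST request for the completion endpoint.

    NOTE: the 'Authorization: Bearer' header is an assumption; check with
    support@langdock.com how your API key should be sent.
    """
    payload = {
        "query": query,
        "provider": provider,
        "model": model,
        "region": region,
        "temperature": temperature,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_completion_request("YOUR_API_KEY", "Once upon a time")
# Sending the request would return the JSON response documented below:
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
```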

Body

application/json
query
string

The prompt to be completed.

provider
enum<string>

The provider of the model to use for completion. Defaults to openai.

Available options:
openai,
anthropic
model
enum<string>

The name of the model to use for completion. Defaults to gpt-3.5-turbo.

Available options:
gpt-3.5-turbo,
gpt-4,
claude-2,
claude-instant-1,
claude-3-haiku,
claude-3-sonnet,
claude-3-opus
region
enum<string>

The region of the model to use for completion. Defaults to eu.

Available options:
eu,
us
temperature
number

The sampling temperature to use for completion. The allowed range is between 0 and 1. Defaults to 0.7.
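The parameters above are constrained to fixed enums and, for `temperature`, to the 0–1 range. As a sketch, a client can validate a request body against these constraints before sending (the defaults mirror the documentation; whether the API rejects or clamps invalid values is not specified here):

```python
# Allowed values, taken from the parameter list above.
PROVIDERS = {"openai", "anthropic"}
MODELS = {"gpt-3.5-turbo", "gpt-4", "claude-2", "claude-instant-1",
          "claude-3-haiku", "claude-3-sonnet", "claude-3-opus"}
REGIONS = {"eu", "us"}

def validate_body(body):
    """Check a completion request body against the documented constraints.

    Raises ValueError on the first violation; returns the body unchanged
    when every field is valid or absent (absent fields use the defaults).
    """
    if body.get("provider", "openai") not in PROVIDERS:
        raise ValueError(f"unknown provider: {body['provider']!r}")
    if body.get("model", "gpt-3.5-turbo") not in MODELS:
        raise ValueError(f"unknown model: {body['model']!r}")
    if body.get("region", "eu") not in REGIONS:
        raise ValueError(f"unknown region: {body['region']!r}")
    temperature = body.get("temperature", 0.7)
    if not 0 <= temperature <= 1:
        raise ValueError(f"temperature must be between 0 and 1, "
                         f"got {temperature}")
    return body
```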

Response

200 - application/json
message
string

The completion performed by the model.

model
object

The model that performed the completion: its provider, model name, and region.

usage
object

Token consumption for the request, split into prompt tokens and completion tokens.
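The `usage` object lets a client track token consumption per request. A minimal sketch, using the documented response shape (the field values are illustrative):

```python
def summarize(response):
    """Extract the completion text and total token count from a parsed
    completion response (shape as documented above)."""
    usage = response["usage"]
    total_tokens = usage["promptTokens"] + usage["completionTokens"]
    return response["message"], total_tokens

# Example with the documented response shape (values are made up):
example = {
    "message": "Hello!",
    "model": {"provider": "openai", "model": "gpt-3.5-turbo", "region": "eu"},
    "usage": {"promptTokens": 12, "completionTokens": 5},
}
text, total = summarize(example)  # total == 17
```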