/completion
curl --request POST \
--url https://engine.langdock.com/api/completion \
--header 'Content-Type: application/json' \
--data '{
"model": "gpt-3.5-turbo",
"provider": "openai",
"query": "<string>",
"region": "eu",
"temperature": 123
}'
{
"message": "<string>",
"model": {
"model": "<string>",
"provider": "<string>",
"region": "<string>"
},
"usage": {
"completionTokens": 123,
"promptTokens": 123
}
}
The completion endpoint allows you to complete a prompt with the selected model. To configure the model, use the model parameter together with the provider parameter. More models will be added over time.
To use the API you need an API key. To request access, please contact us at support@langdock.com.
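As a minimal sketch, the same request can be sent from Python with the requests library. The Authorization header below is an assumption: this page does not specify how the API key is passed, so confirm the scheme when you receive your key.

import requests

# Assumption: the API key is sent as a Bearer token. This page does not
# document the auth scheme, so adjust the header if your key instructions differ.
API_KEY = "your-api-key"

response = requests.post(
    "https://engine.langdock.com/api/completion",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    json={
        "model": "gpt-3.5-turbo",
        "provider": "openai",
        "query": "Write a one-sentence summary of the French Revolution.",
        "region": "eu",
        "temperature": 0.7,
    },
)
response.raise_for_status()
print(response.json()["message"])  # the completion text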
Body
model (string)
The name of the model to use for completion. Defaults to gpt-3.5-turbo.
Available options: gpt-3.5-turbo, gpt-4, llama-2-70b-chat, codellama-34b-instruct, codellama-13b-instruct, codellama-7b-instruct, claude-2, claude-1, claude-instant-1
provider (string)
The provider of the model to use for completion. Defaults to openai.
Available options: openai, meta, anthropic
query (string)
The prompt to be completed.
region (string)
The region of the model to use for completion. Defaults to eu.
Available options: eu, us
temperature (number)
The sampling temperature to use for completion. The allowed range is between 0 and 1. Defaults to 0.7.
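For illustration, here is a request body selecting a non-default model, written as a Python dict. The pairing of claude-2 with the anthropic provider is inferred from the option lists above rather than stated on this page.

# Illustrative payload; the model/provider pairing is inferred, not documented here.
payload = {
    "model": "claude-2",       # one of the models listed above
    "provider": "anthropic",   # Claude models come from Anthropic
    "query": "Explain what a sampling temperature does.",
    "region": "eu",            # eu or us
    "temperature": 0.2,        # 0 to 1; lower values give more deterministic output
}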
Response
message (string)
The completion performed by the model.
model (object)
The model that performed the completion, including its provider and region.
usage (object)
Contains relevant information about the token consumption of the request.
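A minimal sketch of reading these fields in Python; the values below are illustrative placeholders matching the response schema above.

# Illustrative parsed response body; in practice use resp = response.json().
resp = {
    "message": "The completion text.",
    "model": {"model": "gpt-3.5-turbo", "provider": "openai", "region": "eu"},
    "usage": {"completionTokens": 42, "promptTokens": 13},
}

print(resp["message"])  # the completion performed by the model
total_tokens = resp["usage"]["promptTokens"] + resp["usage"]["completionTokens"]
print(f"Model: {resp['model']['model']} ({resp['model']['provider']}, {resp['model']['region']})")
print(f"Tokens used: {total_tokens}")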