POST /anthropic/{region}/v1/messages
Create a Message
curl --request POST \
  --url https://api.langdock.com/anthropic/{region}/v1/messages \
  --header 'Authorization: Bearer <YOUR_LANGDOCK_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
  "max_tokens": 1024,
  "messages": [
    {
      "content": "Write a haiku about cats.",
      "role": "user"
    }
  ],
  "model": "claude-3-haiku-20240307"
}'
{
  "content": [
    {
      "text": "Here is a haiku about cats:\n\nFeline grace and charm,\nPurring softly by the fire,\nCats reign supreme.",
      "type": "text"
    }
  ],
  "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
  "model": "claude-3-haiku-20240307",
  "role": "assistant",
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "type": "message",
  "usage": {
    "input_tokens": 14,
    "output_tokens": 35
  }
}
Creates a model response for the given chat conversation. This endpoint follows the Anthropic API specification, and requests are sent to the AWS Bedrock Anthropic endpoint.
To use the API, you need an API key. Admins can create API keys in the settings.
All parameters of the Anthropic “Create a message” endpoint are supported according to the Anthropic specifications, with the following exception:
  • model: The supported models depend on the region. In the EU, the following models are currently supported: claude-3-7-sonnet-20250219, claude-3-5-sonnet-20240620, claude-3-sonnet-20240229, and claude-3-haiku-20240307; in the US, the following models are supported: claude-3-5-sonnet-20240620, claude-3-haiku-20240307, and claude-3-opus-20240229.
  • The list of available models may differ if you use your own API keys in Langdock (“Bring your own keys” / BYOK, see here for details). In that case, please contact your admin to find out which models are available through the API.

Rate Limits

The rate limit for the Messages endpoint is 500 RPM (requests per minute) and 60,000 TPM (tokens per minute). Rate limits are defined at the workspace level, not at the API key level. Each model has its own rate limit. If you exceed your rate limit, you will receive a 429 Too Many Requests response. Please note that rate limits are subject to change; refer to this documentation for the most up-to-date information. If you need a higher rate limit, please contact us at support@langdock.com.
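
Because 429 responses are expected once a workspace reaches its limit, clients should back off and retry. Below is a minimal sketch using the Anthropic Python library; the retry count and backoff schedule are illustrative assumptions, not values prescribed by Langdock.

import time

from anthropic import Anthropic, RateLimitError

client = Anthropic(
    base_url="https://api.langdock.com/anthropic/eu/",
    api_key="<YOUR_LANGDOCK_API_KEY>",
)

def create_with_backoff(max_retries=5, **request):
    # Retry on 429 with exponential backoff: 1s, 2s, 4s, ...
    for attempt in range(max_retries):
        try:
            return client.messages.create(**request)
        except RateLimitError:
            time.sleep(2 ** attempt)
    raise RuntimeError("rate limit still exceeded after retries")

message = create_with_backoff(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a haiku about cats"}],
)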

Using Anthropic-compatible libraries

Since the request and response format is the same as the Anthropic API, you can use popular libraries such as the Anthropic Python library or the Vercel AI SDK with the Langdock API.

Example with the Anthropic Python library

from anthropic import Anthropic

client = Anthropic(
    base_url="https://api.langdock.com/anthropic/eu/",
    api_key="<YOUR_LANGDOCK_API_KEY>",
)

message = client.messages.create(
    model="claude-3-haiku-20240307",
    messages=[
        {"role": "user", "content": "Write a haiku about cats"}
    ],
    max_tokens=1024,
)

print(message.content[0].text)

Example with the Vercel AI SDK in Node.js

import { generateText } from "ai";
import { createAnthropic } from "@ai-sdk/anthropic";

const langdockProvider = createAnthropic({
  baseURL: "https://api.langdock.com/anthropic/eu/v1",
  apiKey: "<YOUR_LANGDOCK_API_KEY>",
});

const result = await generateText({
  model: langdockProvider("claude-3-haiku-20240307"),
  prompt: "Write a haiku about cats",
});

console.log(result.text);

Headers

Authorization
string
required

API key as a Bearer token, in the format "Bearer YOUR_API_KEY".

Path Parameters

region
enum<string>
required

The region of the API to use.

Available options:
eu,
us

Body

application/json
model
enum<string>
required

The model that will complete your prompt. See models for additional details and options.

Available options:
claude-3-7-sonnet-20250219,
claude-3-5-sonnet-20240620,
claude-3-opus-20240229,
claude-3-haiku-20240307
messages
InputMessage · object[]
required

Input messages.

Anthropic's models are trained to operate on alternating user and assistant conversational turns. When creating a new Message, you specify the prior conversational turns with the messages parameter, and the model then generates the next Message in the conversation.

Each input message must be an object with a role and content. You can specify a single user-role message, or you can include multiple user and assistant messages. The first message must always use the user role.

If the final message uses the assistant role, the response content will continue immediately from the content in that message. This can be used to constrain part of the model's response.

Example with a single user message:

[{"role": "user", "content": "Hello, Claude"}]

Example with multiple conversational turns:

[
  {"role": "user", "content": "Hello there."},
  {"role": "assistant", "content": "Hi, I'm Claude. How can I help you?"},
  {"role": "user", "content": "Can you explain LLMs in plain English?"}
]

Example with a partially-filled response from Claude:

[
  {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
  {"role": "assistant", "content": "The best answer is ("}
]
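
The same prefill technique with the Anthropic Python library (a sketch, reusing the client from the Python example above; the prefilled text is not repeated in the response, so prepend it yourself):

# Prefill the assistant turn so the model continues mid-sentence
message = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=16,
    messages=[
        {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
        {"role": "assistant", "content": "The best answer is ("},
    ],
)

print("The best answer is (" + message.content[0].text)  # e.g. continues with "B) Helios..."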

Each input message content may be either a single string or an array of content blocks, where each block has a specific type. Using a string for content is shorthand for an array of one content block of type "text". The following input messages are equivalent:

{"role": "user", "content": "Hello, Claude"}
{"role": "user", "content": [{"type": "text", "text": "Hello, Claude"}]}

Starting with Claude 3 models, you can also send image content blocks:

{"role": "user", "content": [
{
"type": "image",
"source": {
"type": "base64",
"media_type": "image/jpeg",
"data": "/9j/4AAQSkZJRg...",
}
},
{"type": "text", "text": "What is in this image?"}
]}

We currently support the base64 source type for images, and the image/jpeg, image/png, image/gif, and image/webp media types.
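
For instance, sending a local JPEG with the Anthropic Python library (a sketch; cat.jpg is a hypothetical file, and the client comes from the Python example above):

import base64

# Base64-encode the image for the "base64" source type
with open("cat.jpg", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/jpeg",
                    "data": image_data,
                },
            },
            {"type": "text", "text": "What is in this image?"},
        ],
    }],
)

print(message.content[0].text)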

See examples for more input examples.

Note that if you want to include a system prompt, you can use the top-level system parameter; there is no "system" role for input messages in the Messages API.

max_tokens
integer
required

The maximum number of tokens to generate before stopping.

Note that Anthropic's models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.

Different models have different maximum values for this parameter. See models for details.

Required range: x >= 1
Example:
1024
stop_sequences
string[]

Custom text sequences that will cause the model to stop generating.

Anthropic's models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn".

If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence.
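
A short sketch with the Anthropic Python library (client as in the example above; the "###" delimiter is an arbitrary illustrative choice):

message = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    stop_sequences=["###"],
    messages=[{"role": "user", "content": "Write one haiku, then ### on its own line."}],
)

if message.stop_reason == "stop_sequence":
    # The matched sequence is reported here but not included in the content
    print("stopped at:", message.stop_sequence)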

stream
boolean

Whether to incrementally stream the response using server-sent events.

See streaming for details.
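
A minimal streaming sketch with the Anthropic Python library (client as in the example above):

# Stream the response incrementally via server-sent events
with client.messages.stream(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a haiku about cats"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)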

system

System prompt.

A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role. See Anthropic's guide to system prompts.

Example (as an array of content blocks):
[{"type": "text", "text": "Today's date is 2024-06-01."}]

Example (as a string):
"Today's date is 2023-01-01."
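
For instance, with the Anthropic Python library (client as in the example above; the prompt text is illustrative):

message = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    system="You are a terse assistant that always answers in haiku form.",
    messages=[{"role": "user", "content": "Describe cats."}],
)
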
temperature
number

Amount of randomness injected into the response.

Defaults to 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0 for analytical / multiple choice, and closer to 1.0 for creative and generative tasks.

Note that even with temperature of 0.0, the results will not be fully deterministic.

Required range: 0 <= x <= 1
Example:
1
tool_choice
object

How the model should use the provided tools. The model can be directed to use a specific tool, to use any available tool, or to decide by itself; if unspecified, the model automatically decides whether to use tools.
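
The accepted shapes follow the Anthropic API specification. A sketch of the three variants (get_weather refers to the example tool shown under the tools parameter below):

# Let the model decide whether to use tools (the default)
tool_choice = {"type": "auto"}

# Require the model to use one of the provided tools
tool_choice = {"type": "any"}

# Require one specific tool by name
tool_choice = {"type": "tool", "name": "get_weather"}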

tools
Tool · object[]

Definitions of tools that the model may use.

If you include tools in your API request, the model may return tool_use content blocks that represent the model's use of those tools. You can then run those tools using the tool input generated by the model and then optionally return results back to the model using tool_result content blocks.

Each tool definition includes:

  • name: Name of the tool.
  • description: Optional, but strongly-recommended description of the tool.
  • input_schema: JSON schema for the tool input shape that the model will produce in tool_use output content blocks.

For example, if you defined tools as:

[
  {
    "name": "get_stock_price",
    "description": "Get the current stock price for a given ticker symbol.",
    "input_schema": {
      "type": "object",
      "properties": {
        "ticker": {
          "type": "string",
          "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
        }
      },
      "required": ["ticker"]
    }
  }
]

And then asked the model "What's the S&P 500 at today?", the model might produce tool_use content blocks in the response like this:

[
  {
    "type": "tool_use",
    "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "name": "get_stock_price",
    "input": { "ticker": "^GSPC" }
  }
]

You might then run your get_stock_price tool with {"ticker": "^GSPC"} as an input, and return the following back to the model in a subsequent user message:

[
  {
    "type": "tool_result",
    "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "content": "259.75 USD"
  }
]

Tools can be used for workflows that include running client-side tools and functions, or more generally whenever you want the model to produce a particular JSON structure of output.

See Anthropic's guide for more details.

Example:
[
  {
    "name": "get_weather",
    "description": "Get the current weather in a given location",
    "input_schema": {
      "type": "object",
      "properties": {
        "location": {
          "type": "string",
          "description": "The city and state, e.g. San Francisco, CA"
        },
        "unit": {
          "type": "string",
          "description": "Unit for the output - one of (celsius, fahrenheit)"
        }
      },
      "required": ["location"]
    }
  }
]
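
Putting this together, here is a sketch of the full round trip for the get_weather example, using the Anthropic Python library (client as above; lookup_weather is a hypothetical stand-in for your own client-side implementation):

tools = [{
    "name": "get_weather",
    "description": "Get the current weather in a given location",
    "input_schema": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
            "unit": {"type": "string", "description": "Unit for the output - one of (celsius, fahrenheit)"},
        },
        "required": ["location"],
    },
}]

messages = [{"role": "user", "content": "What's the weather in San Francisco, CA?"}]
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    tools=tools,
    messages=messages,
)

if response.stop_reason == "tool_use":
    tool_use = next(b for b in response.content if b.type == "tool_use")
    result = lookup_weather(**tool_use.input)  # hypothetical client-side function

    # Return the result in a user message that references the tool_use id
    messages.append({"role": "assistant", "content": response.content})
    messages.append({
        "role": "user",
        "content": [{
            "type": "tool_result",
            "tool_use_id": tool_use.id,
            "content": result,
        }],
    })

    final = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        tools=tools,
        messages=messages,
    )
    print(final.content[0].text)
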
top_k
integer

Only sample from the top K options for each subsequent token.

Used to remove "long tail" low probability responses. Learn more technical details here.

Recommended for advanced use cases only. You usually only need to use temperature.

Required range: x >= 0
Example:
5
top_p
number

Use nucleus sampling.

In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both.

Recommended for advanced use cases only. You usually only need to use temperature.

Required range: 0 <= x <= 1
Example:
0.7

Response

Message object.

id
string
required

Unique object identifier.

The format and length of IDs may change over time.

Example:
["msg_013Zva2CMHLNnXjNJJKqJ2EF"]
type
enum<string>
default:message
required

Object type.

For Messages, this is always "message".

Available options:
message
role
enum<string>
default:assistant
required

Conversational role of the generated message.

This will always be "assistant".

Available options:
assistant
content
Content · array
required

Content generated by the model.

This is an array of content blocks, each of which has a type that determines its shape.

Example:

[{"type": "text", "text": "Hi, I'm Claude."}]

If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.

For example, if the input messages were:

[
  {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
  {"role": "assistant", "content": "The best answer is ("}
]

Then the response content might be:

[{"type": "text", "text": "B)"}]
Example:
[
  {
    "text": "Hi! My name is Claude.",
    "type": "text"
  }
]
model
enum<string>
required

The model that will complete your prompt. See models for additional details and options.

Available options:
claude-3-7-sonnet-20250219,
claude-3-5-sonnet-20240620,
claude-3-opus-20240229,
claude-3-haiku-20240307
stop_reason
enum<string>
required

The reason that we stopped.

This may be one of the following values:

  • "end_turn": the model reached a natural stopping point
  • "max_tokens": we exceeded the requested max_tokens or the model's maximum
  • "stop_sequence": one of your provided custom stop_sequences was generated
  • "tool_use": the model invoked one or more tools

In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.

Available options:
end_turn,
max_tokens,
stop_sequence,
tool_use
stop_sequence
string
required

Which custom stop sequence was generated, if any.

This value will be a non-null string if one of your custom stop sequences was generated.

usage
object
required

Input and output token counts, representing the underlying cost to our systems.

Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.

For example, output_tokens will be non-zero, even for an empty string response from Claude.

Example:
{
  "input_tokens": 2095,
  "output_tokens": 503
}