Code generation with Mistral's Codestral model.
Requests that exceed the rate limit receive a 429 Too Many Requests response.
Please note that the rate limits are subject to change; refer to this documentation for the most up-to-date information.
If you need a higher rate limit, please contact us at [email protected].
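If you hit the limit, the usual remedy is to back off and retry. Below is a minimal Python sketch of that pattern; whether the API sets a Retry-After header is an assumption, so the sketch falls back to exponential backoff when the header is absent.

```python
import time
import requests

def post_with_backoff(url: str, headers: dict, payload: dict, max_retries: int = 5):
    """POST a request, retrying with exponential backoff on 429 responses."""
    for attempt in range(max_retries):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 429:
            return response
        # Assumption: the API may send a Retry-After header; fall back to
        # exponential backoff (1s, 2s, 4s, ...) if it does not.
        wait = float(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError("Rate limit still exceeded after retries")
```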
Edit ~/.continue/config.json (macOS / Linux) or %USERPROFILE%\.continue\config.json (Windows).
Below is an example setup for using Continue with the Codestral model for autocompletion, and with the Claude 3.5 Sonnet and GPT-4o models for chat and edits, all served through the Langdock API.
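A minimal config.json sketch along these lines follows. The apiBase URLs and model identifiers are assumptions derived from Langdock's region-scoped endpoints; replace YOUR_API_KEY with your own key and verify the values against the current API reference.

```json
{
  "models": [
    {
      "title": "Claude 3.5 Sonnet (Langdock)",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20240620",
      "apiBase": "https://api.langdock.com/anthropic/eu/v1",
      "apiKey": "YOUR_API_KEY"
    },
    {
      "title": "GPT-4o (Langdock)",
      "provider": "openai",
      "model": "gpt-4o",
      "apiBase": "https://api.langdock.com/openai/eu/v1",
      "apiKey": "YOUR_API_KEY"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Codestral (Langdock)",
    "provider": "mistral",
    "model": "codestral-2501",
    "apiBase": "https://api.langdock.com/mistral/eu/v1",
    "apiKey": "YOUR_API_KEY"
  }
}
```

The models list covers chat and edits, while tabAutocompleteModel drives inline completion.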
Headers

Authorization (string, required)
API key as Bearer token. Format: "Bearer YOUR_API_KEY"

Path parameters

region (enum<string>, required)
The region of the API to use. Available option: eu

Body parameters

model (string, required)
ID of the model to use. Currently only compatible with: codestral-2501

prompt (string, required)
The text/code to complete.

temperature (number, 0 <= x <= 1.5)
The sampling temperature to use; we recommend a value between 0.0 and 0.7. Higher values like 0.7 make the output more random, while lower values like 0.2 make it more focused and deterministic. We generally recommend altering this or top_p, but not both. The default value varies depending on the model you are targeting; call the /models endpoint to retrieve the appropriate value.

top_p (number, 0 <= x <= 1)
Nucleus sampling, where the model considers the results of the tokens with top_p probability mass. A value of 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both.

max_tokens (integer, x >= 0)
The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length.

stream (boolean)
Whether to stream back partial progress. If set, tokens are sent as data-only server-sent events as they become available, and the stream is terminated by a data: [DONE] message. Otherwise, the server holds the request open until the timeout or until completion, with the response containing the full result as JSON.

stop (string | string[])
Stop generation if this token is detected, or if one of these tokens is detected when an array is provided.

random_seed (integer, x >= 0)
The seed to use for random sampling. If set, different calls will generate deterministic results.

suffix (string)
Optional text/code that adds more context for the model. When given a prompt and a suffix, the model will fill in what is between them. When no suffix is provided, the model simply completes starting from the prompt.

min_tokens (integer, x >= 0)
The minimum number of tokens to generate in the completion.
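To show how these body parameters fit together, here is a minimal Python sketch of a fill-in-the-middle request. The endpoint path is an assumption built from the region parameter above, and the response shape is assumed to mirror Mistral's completion format; check both against the API reference before use.

```python
import requests

# Assumed endpoint: Mistral-compatible FIM route scoped to the "eu" region.
url = "https://api.langdock.com/mistral/eu/v1/fim/completions"

headers = {
    "Authorization": "Bearer YOUR_API_KEY",  # API key as Bearer token
    "Content-Type": "application/json",
}

payload = {
    "model": "codestral-2501",
    "prompt": "def fibonacci(n: int) -> int:\n",  # code before the gap
    "suffix": "\nprint(fibonacci(10))",           # code after the gap; the model fills in between
    "temperature": 0.2,   # low temperature for focused, deterministic code
    "max_tokens": 128,
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
# Assumption: the response mirrors Mistral's chat-completion shape.
print(response.json()["choices"][0]["message"]["content"])
```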