The more context and detail you add, the better your response will be, because the model understands precisely what you expect. Don't miss our Prompt Engineering Guide to learn how to write great prompts.
The image models currently available in Langdock include Flux1.1 Pro Ultra and Flux.1 Kontext from our partner Black Forest Labs. Additionally, you can access Imagen 4, Imagen 4 Fast, and Gemini 2.5 Flash Image (Nano Banana) from Google, as well as DALL-E 3 and GPT Image 1 from OpenAI. Image generation uses the following steps:
  1. You can use image generation in Langdock via the “Image” button in the chat field. This uses the default image model; you can also pick a different model using the selector in the button.
  2. The chat model then chooses the image generation tool and writes a prompt for the image model in the background.
  3. The image model generates the image based on that prompt and returns it to the main model and to you as the user.
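The three steps above can be sketched as a small two-stage pipeline. This is a hypothetical illustration only: the function names, stubbed responses, and model identifier are made up for this example and are not part of any real Langdock API.

```python
# Illustrative sketch of the image generation flow described above.
# All names and return values here are hypothetical stand-ins.

def chat_model_write_image_prompt(user_request: str) -> str:
    """Step 2: the chat model rewrites the user's request into a
    detailed prompt for the image model (stubbed here)."""
    return f"A detailed, high-quality rendering of: {user_request}"

def image_model_generate(image_prompt: str) -> dict:
    """Step 3: the image model renders the prompt and returns the
    result (represented here as a plain dict, not real image data)."""
    return {
        "model": "example-image-model",  # hypothetical identifier
        "prompt": image_prompt,
        "image_bytes": b"",  # placeholder for the generated image
    }

def generate_image(user_request: str) -> dict:
    # Step 1: the user triggers image generation (the "Image" button).
    image_prompt = chat_model_write_image_prompt(user_request)
    return image_model_generate(image_prompt)

result = generate_image("a lighthouse at sunset")
print(result["prompt"])
```

The key design point is the indirection in step 2: the image model never sees your chat message directly, only the prompt the chat model writes for it, which is why different chat models produce different images from the same request.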
You can select any language model for image generation. Each model sends prompts to the underlying image generation model differently, so feel free to try different models and see how the generated images differ. Here’s a known limitation we’re working on:
  • Text in images contains mistakes or non-existent letters: this happens because the models are trained on real images that include text. The model generates shapes that look similar to what it has learned, but it can't yet write full, correct sentences. Instead, it mimics letters from the alphabet, which leads to misspellings or made-up characters. This is a current limitation of image generation models that providers are actively improving in upcoming versions.