There are two ways the contents of a file can be processed to generate an answer:
- The entire document is sent to the model together with your prompt (see this guide). This is the standard approach in chats and agents.
- AI models have a context window, a limit on how much text can be processed at once. For long documents, or for a large number of documents, the content is split into chunks, and a semantic search retrieves only the most relevant sections, which are then sent to the model within the context window. This approach is used in folders.
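The chunk-and-retrieve flow above can be sketched in a few lines. This is an illustrative toy only: the chunk size, the bag-of-words "embedding", and the `retrieve` helper are all assumptions for the sketch, not Langdock's actual pipeline, which uses neural embeddings and its own chunking rules.

```python
# Toy sketch of chunking + semantic retrieval (not the real implementation).
from collections import Counter
import math


def chunk(text, size=200):
    """Split text into fixed-size word chunks (hypothetical chunk size)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def embed(text):
    """Toy bag-of-words vector; real systems use neural embedding models."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def retrieve(query, chunks, top_k=3):
    """Return the top_k chunks most similar to the query."""
    query_vec = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, embed(c)), reverse=True)
    return ranked[:top_k]


document = "..."  # a long document that would not fit in the context window
relevant_sections = retrieve("billing settings", chunk(document), top_k=3)
# Only `relevant_sections` (not the whole document) would then be placed
# in the model's context window alongside the prompt.
```

The key design point is that only the retrieved sections enter the context window, so the total text sent to the model stays within its limit regardless of how large the original documents are.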