# Airtable
Source: https://docs.langdock.com/administration/integrations/airtable
Database solution combining spreadsheet simplicity with powerful database capabilities
## Overview
Airtable is a cloud-based database platform that combines the simplicity of a spreadsheet with the power of a database. Through Langdock's integration, you can manage records, schemas, and bases directly from your conversations.
**Authentication:** OAuth\
**Category:** Productivity & Collaboration\
**Availability:** All workspace plans
## Available Actions
### List Bases
##### `airtable.listBases`
Lists all available bases in your Airtable workspace.
**Requires Confirmation:** No\
**Parameters:** None
**Example Response** (illustrative; your base names and IDs will differ):
```json
{
  "bases": [
    {
      "id": "appXXXXXXXXXXXXXX",
      "name": "Product Catalog",
      "permissionLevel": "create"
    }
  ]
}
```
***
### Get Base Schema
##### `airtable.getBaseSchema`
Returns the complete schema of tables in the specified base, including field types and relationships.
**Requires Confirmation:** No
**Parameters:**
* `baseId` (string, required): The unique identifier of the base
**Example Usage:**
> "Get the schema for base appXXXXXXXXXXXXXX"
**Example Response** (illustrative; your tables and fields will differ):
```json
{
  "tables": [
    {
      "id": "tblXXXXXXXXXXXXXX",
      "name": "Products",
      "primaryFieldId": "fldXXXXXXXXXXXXXX",
      "fields": [
        { "id": "fldXXXXXXXXXXXXXX", "name": "Name", "type": "singleLineText" },
        { "id": "fldYYYYYYYYYYYYYY", "name": "Price", "type": "number" }
      ]
    }
  ]
}
```
***
### Find Records
##### `airtable.findRecords`
Searches for records in a table with optional filtering.
**Requires Confirmation:** No
**Parameters:**
* `baseId` (string, required): The unique identifier of the base
* `tableIdOrName` (string, required): The table ID or name
* `filterByFormula` (string, optional): Airtable formula to filter records
* `maxRecords` (number, optional): Maximum number of records to return
**Filter Formula Examples:**
* Status equals 'Active': `"Status = 'Active'"`
* Age greater than 25: `"Age > 25"`
* Date after specific date: `"IS_AFTER(CreatedDate, '2023-01-01')"`
* Text search: `"FIND('urgent', Description)"`
* Multiple conditions: `"AND(Status = 'Active', Priority = 'High')"`
* Check non-empty field: `"NOT({Email} = '')"`
**Example Usage:**
> "Find all active products in the Products table where price is greater than 100"
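Longer filter formulas are easier to get right when assembled programmatically before being passed as `filterByFormula`; a minimal sketch (the helper names are our own illustration, not part of the integration):

```python
def quote(value):
    """Quote a literal for use inside an Airtable formula."""
    if isinstance(value, str):
        return "'" + value.replace("'", "\\'") + "'"
    return str(value)

def equals(field, value):
    """Build a {Field} = value comparison."""
    return f"{{{field}}} = {quote(value)}"

def and_formula(*conditions):
    """Combine conditions with Airtable's AND()."""
    return "AND(" + ", ".join(conditions) + ")"

# Build: AND({Status} = 'Active', {Price} > 100)
formula = and_formula(equals("Status", "Active"), "{Price} > 100")
```

Wrapping field names in curly braces, as these helpers do, also keeps formulas valid for fields whose names contain spaces.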
***
### Create Records
##### `airtable.createRecords`
Creates new records in a specified table.
**Requires Confirmation:** Yes
**Parameters:**
* `baseId` (string, required): The unique identifier of the base
* `tableIdOrName` (string, required): The table ID or name
* `records` (object, required): The records to create
**Example Request** (illustrative; field names depend on your table):
```json
{
  "records": [
    {
      "fields": {
        "Name": "Premium Widget",
        "Price": 199.99
      }
    }
  ]
}
```
**Example Usage:**
> "Create a new product called 'Premium Widget' with price \$199.99 in the Products table"
***
### Update Records
##### `airtable.updateRecords`
Updates existing records in a table.
**Requires Confirmation:** Yes
**Parameters:**
* `baseId` (string, required): The unique identifier of the base
* `tableIdOrName` (string, required): The table ID or name
* `recordId` (string, required): The unique identifier of the record to update
* `records` (object, required): The fields to update
**Example Request** (illustrative; field names depend on your table):
```json
{
  "recordId": "recXXXXXXXXXXXXXX",
  "records": {
    "Price": 149.99,
    "Status": "Sale"
  }
}
```
**Example Usage:**
> "Update the price of product recXXXXXXXXXXXXXX to \$149.99 and set status to Sale"
***
### Delete Record
##### `airtable.deleteRecord`
Deletes a specified record from a table.
**Requires Confirmation:** Yes
**Parameters:**
* `baseId` (string, required): The unique identifier of the base
* `tableIdOrName` (string, required): The table ID or name
* `recordId` (string, required): The unique identifier of the record to delete
**Example Usage:**
> "Delete record recXXXXXXXXXXXXXX from the Products table"
***
## Common Use Cases
* Create and update product catalogs
* Manage inventory records
* Track project tasks and milestones
* Add new customer records
* Update contact information
* Search for customer data
* Organize content calendars
* Track publication status
* Manage editorial workflows
* Filter records by criteria
* Export specific data sets
* Generate reports from bases
## Best Practices
**Performance Tips:**
* Use `maxRecords` parameter to limit large result sets
* Prefer table IDs over names for better reliability
* Use filter formulas to reduce data transfer
* Batch create/update operations when possible
**Important Considerations:**
* Record IDs are permanent and cannot be reused after deletion
* Base and table IDs are more reliable than names
* Filter formulas are case-sensitive
* API rate limits apply (5 requests per second per base)
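If you script against the Airtable API directly, the 5 requests/second limit can be respected with simple client-side pacing; a minimal sketch (the `RatePacer` class is our own illustration, not part of the integration):

```python
import time

class RatePacer:
    """Spaces out calls so a per-base limit (e.g. 5 requests/second)
    is never exceeded."""

    def __init__(self, max_per_second=5):
        self.min_interval = 1.0 / max_per_second
        self.last_call = float("-inf")  # no request issued yet

    def wait(self, now=None):
        """Return the seconds to sleep before the next request may go out,
        and reserve that slot."""
        now = time.monotonic() if now is None else now
        delay = max(0.0, self.last_call + self.min_interval - now)
        self.last_call = now + delay
        return delay

# Usage: time.sleep(pacer.wait()) before each API call
pacer = RatePacer(max_per_second=5)
```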
## Workflow Examples
### Example 1: Product Inventory Update
```
1. List all bases to find your inventory base
2. Get the base schema to understand table structure
3. Find products with low stock using filterByFormula
4. Update stock levels for specific products
5. Create records for new incoming products
```
### Example 2: Customer Data Search
```
1. Connect to customer database base
2. Search for customers by email or name
3. Update customer status or information
4. Add notes or tags to customer records
```
## Troubleshooting
| Issue | Solution |
| --------------------- | -------------------------------------------- |
| "Base not found" | Verify the base ID using List Bases action |
| "Invalid formula" | Check formula syntax and field names |
| "Permission denied" | Ensure OAuth token has necessary permissions |
| "Rate limit exceeded" | Implement delays between requests |
## Related Integrations
* [Google Sheets](/administration/integrations/google-sheets) - For simpler spreadsheet operations
* [Notion](/administration/integrations/notion) - For wiki-style databases
* [Excel](/administration/integrations/excel) - For advanced spreadsheet analysis
## Support
For additional help with the Airtable integration, contact [support@langdock.com](mailto:support@langdock.com)
# Asana
Source: https://docs.langdock.com/administration/integrations/asana
Work management platform that helps teams plan, track, and deliver projects
## Overview
Asana is a comprehensive work management platform that helps teams organize, track, and manage their work. Through Langdock's integration, you can manage workspaces, projects, tasks, and teams directly from your conversations.
**Authentication:** OAuth\
**Category:** Productivity & Collaboration\
**Availability:** All workspace plans
## Available Actions
### Workspaces & Teams
#### List Workspaces
##### `asana.list_workspaces`
Get all accessible workspaces in your Asana account.
**Requires Confirmation:** Yes
**Parameters:**
* `limit` (number, optional): Limit number of results (1-100)
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "List all my Asana workspaces"
***
#### Get Teams for Workspace
##### `asana.get_teams_for_workspace`
List all teams in a specific workspace.
**Requires Confirmation:** Yes
**Parameters:**
* `workspaceGid` (string, required): Workspace GID
* `optFields` (string, optional): Comma-separated fields to include
* `limit` (number, optional): Maximum number of teams to return
**Example Usage:**
> "Show me all teams in workspace 12345"
***
#### Get Teams for User
##### `asana.get_teams_for_user`
Get teams a user belongs to.
**Requires Confirmation:** Yes
**Parameters:**
* `userGid` (string, required): User identifier ('me', email, or user GID)
* `organization` (string, required): Workspace or organization GID
* `optFields` (string, optional): Comma-separated fields to include
* `limit` (number, optional): Maximum number of teams to return
**Example Usage:**
> "Show me all teams that I'm part of in organization 12345"
***
### Users
#### Get Users
##### `asana.get_users`
List users across accessible workspaces; optionally filter by workspace or team.
**Requires Confirmation:** Yes
**Parameters:**
* `workspace` (string, optional): Workspace GID to filter users
* `team` (string, optional): Team GID to filter users
* `limit` (number, optional): Maximum number of users to return
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Show me all users in the Marketing team"
***
#### Get User
##### `asana.get_user`
Get user details by ID, email, or "me".
**Requires Confirmation:** Yes
**Parameters:**
* `userId` (string, optional): User identifier ('me', email, or user GID)
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Get information about the current user" or "Get information about user [john@company.com](mailto:john@company.com)"
***
### Goals
#### Get Goals
##### `asana.get_goals`
List goals filtered by context (portfolio, project, task, workspace, or team).
**Requires Confirmation:** Yes
**Parameters:**
* `workspace` (string, optional): Workspace GID to filter goals
* `team` (string, optional): Team GID to filter goals
* `project` (string, optional): Project GID to filter goals
* `portfolio` (string, optional): Portfolio GID to filter goals
* `task` (string, optional): Task GID to filter goals
* `timePeriod` (string, optional): Time period IDs to filter goals
* `isWorkspaceLevel` (boolean, optional): Filter to workspace-level goals
* `limit` (number, optional): Maximum number of goals to return
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Show me all goals for the Marketing team"
***
#### Get Goal
##### `asana.get_goal`
Get detailed goal data.
**Requires Confirmation:** Yes
**Parameters:**
* `goalGid` (string, required): Goal GID
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Get details for goal 12345"
***
#### Get Parent Goals for Goal
##### `asana.get_parent_goals_for_goal`
List all parent goals for a specific goal.
**Requires Confirmation:** Yes
**Parameters:**
* `goalGid` (string, required): Goal GID
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Show me all parent goals for goal 12345"
***
### Projects
#### Get Projects
##### `asana.get_projects`
List projects filtered by workspace.
**Requires Confirmation:** Yes
**Parameters:**
* `workspace` (string, required): Workspace GID
* `team` (string, optional): Filter projects on team id
* `archived` (boolean, optional): Include archived projects
* `limit` (number, optional): Maximum number of projects to return
* `optFields` (string, optional): Comma-separated fields to include. Example: `name,owner.name,workspace.name,team.name,archived,current_status.color`
**Example Usage:**
> "Show me all active projects in workspace 12345"
***
#### Get Projects for Workspace
##### `asana.get_projects_for_workspace`
Get ALL projects in a workspace across all teams.
**Requires Confirmation:** Yes
**Parameters:**
* `workspaceGid` (string, required): Workspace GID
* `archived` (boolean, optional): Filter projects by archived status
* `optFields` (string, optional): Comma-separated fields to include
* `limit` (number, optional): Maximum number of projects to return
**Example Usage:**
> "Get all projects in my marketing workspace"
***
#### Get Projects for Team
##### `asana.get_projects_for_team`
List all projects for a team.
**Requires Confirmation:** Yes
**Parameters:**
* `teamGid` (string, required): Team GID
* `archived` (boolean, optional): Filter projects by archived status
* `optFields` (string, optional): Comma-separated fields to include
* `limit` (number, optional): Maximum number of projects to return
**Example Usage:**
> "Show me all projects for the Engineering team"
***
#### Get Project
##### `asana.get_project`
Get detailed project data.
**Requires Confirmation:** Yes
**Parameters:**
* `projectId` (string, required): Project GID
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Get full details for project 12345"
***
#### Get Project Task Counts
##### `asana.get_project_task_counts`
Get task statistics for a project.
**Requires Confirmation:** Yes
**Parameters:**
* `projectId` (string, required): Project GID
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Get task counts for project 12345"
***
#### Create Project
##### `asana.create_project`
Creates a new project in Asana.
**Requires Confirmation:** Yes
**Parameters:**
* `name` (string, required): Name of the project
* `workspace` (string, optional): Workspace GID
* `team` (string, optional): Team GID
* `notes` (string, optional): Project description
* `htmlNotes` (string, optional): HTML-formatted description
* `owner` (string, optional): User identifier ('me', email, or user GID)
* `followers` (string, optional): Comma-separated list of user GIDs
* `color` (string, optional): Project color
* `dueDate` (string, optional): Due date (YYYY-MM-DD)
* `startDate` (string, optional): Start date (YYYY-MM-DD)
* `defaultView` (string, optional): Default view
* `public` (boolean, optional): Public to workspace
* `archived` (boolean, optional): Archived status
* `isTemplate` (boolean, optional): Is template
* `privacySetting` (string, optional): Privacy level
* `defaultAccessLevel` (string, optional): Default access level
* `minimumAccessLevelForSharing` (string, optional): Minimum access level for sharing
* `minimumAccessLevelForCustomization` (string, optional): Minimum access level for customization
* `customFields` (string, optional): JSON string of custom fields
* `icon` (string, optional): Project icon type
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Create a new project called 'Q1 Marketing Campaign' in the Marketing team"
***
### Tasks
#### Get Tasks
##### `asana.get_tasks`
List tasks filtered by context (workspace/project/tag/section/user list).
**Requires Confirmation:** Yes
**Parameters:**
* `workspace` (string, optional): Workspace GID
* `project` (string, optional): Project GID
* `tag` (string, optional): Tag GID
* `section` (string, optional): Section GID
* `assignee` (string, optional): User identifier ('me', email, or user GID)
* `completedSince` (string, optional): Filter tasks completed since date
* `modifiedSince` (string, optional): Filter tasks modified since date
* `userTaskList` (string, optional): User task list GID
* `limit` (number, optional): Number of tasks to return (1-100)
* `offset` (string, optional): Pagination offset token
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Show me all tasks assigned to me in the Q4 Planning project"
***
#### Search Tasks
##### `asana.search_tasks`
Advanced task search with multiple filters.
**Requires Confirmation:** Yes
**Parameters:**
* `workspace` (string, required): Workspace GID
* `text` (string, optional): Text to search for in task name or description
* `completed` (boolean, optional): Filter for completed or incomplete tasks
* `assigneeAny` (string, optional): Comma-separated list of user identifiers
* `assigneeNot` (string, optional): Comma-separated list of user identifiers to exclude
* `projectsAny` (string, optional): Comma-separated list of project IDs
* `projectsAll` (string, optional): Comma-separated list of project IDs (tasks must be in all)
* `projectsNot` (string, optional): Comma-separated list of project IDs to exclude
* `tagsAny` (string, optional): Comma-separated list of tag IDs
* `tagsAll` (string, optional): Comma-separated list of tag IDs (tasks must have all)
* `tagsNot` (string, optional): Comma-separated list of tag IDs to exclude
* `sectionsAny` (string, optional): Comma-separated list of section or column IDs
* `sectionsAll` (string, optional): Comma-separated list of section IDs (tasks must be in all)
* `sectionsNot` (string, optional): Comma-separated list of section IDs to exclude
* `portfoliosAny` (string, optional): Comma-separated list of portfolio IDs
* `teamsAny` (string, optional): Comma-separated list of team IDs
* `followersAny` (string, optional): Comma-separated list of user identifiers
* `followersNot` (string, optional): Comma-separated list of user identifiers to exclude
* `createdByAny` (string, optional): Comma-separated list of user identifiers
* `createdByNot` (string, optional): Comma-separated list of user identifiers to exclude
* `assignedByAny` (string, optional): Comma-separated list of user identifiers
* `assignedByNot` (string, optional): Comma-separated list of user identifiers to exclude
* `commentedOnByNot` (string, optional): Comma-separated list of user identifiers to exclude
* `likedByNot` (string, optional): Comma-separated list of user identifiers to exclude
* `hasAttachment` (boolean, optional): Filter to tasks with attachments
* `isBlocked` (boolean, optional): Filter to tasks with incomplete dependencies
* `isBlocking` (boolean, optional): Filter to incomplete tasks with dependents
* `isSubtask` (boolean, optional): Filter to subtasks
* `resourceSubtype` (string, optional): Filters results by task's resource\_subtype (e.g., milestone)
* `customFields` (string, optional): JSON string of custom field filters
* `dueOn` (string, optional): ISO 8601 date string or null for due date
* `dueOnAfter` (string, optional): ISO 8601 date string for due date after filter
* `dueOnBefore` (string, optional): ISO 8601 date string for due date before filter
* `dueAtAfter` (string, optional): ISO 8601 datetime string for due datetime after filter
* `dueAtBefore` (string, optional): ISO 8601 datetime string for due datetime before filter
* `startOn` (string, optional): ISO 8601 date string or null for start date
* `startOnAfter` (string, optional): ISO 8601 date string for start date after filter
* `startOnBefore` (string, optional): ISO 8601 date string for start date before filter
* `createdOn` (string, optional): ISO 8601 date string or null for creation date
* `createdOnAfter` (string, optional): ISO 8601 date string for creation date after filter
* `createdOnBefore` (string, optional): ISO 8601 date string for creation date before filter
* `createdAtAfter` (string, optional): ISO 8601 datetime string for creation datetime after filter
* `createdAtBefore` (string, optional): ISO 8601 datetime string for creation datetime before filter
* `modifiedOn` (string, optional): ISO 8601 date string or null for modified date
* `modifiedOnAfter` (string, optional): ISO 8601 date string for modified date after filter
* `modifiedOnBefore` (string, optional): ISO 8601 date string for modified date before filter
* `modifiedAtAfter` (string, optional): ISO 8601 datetime string for modified datetime after filter
* `modifiedAtBefore` (string, optional): ISO 8601 datetime string for modified datetime before filter
* `completedOn` (string, optional): ISO 8601 date string or null for completion date
* `completedOnAfter` (string, optional): ISO 8601 date string for completion date after filter
* `completedOnBefore` (string, optional): ISO 8601 date string for completion date before filter
* `completedAtAfter` (string, optional): ISO 8601 datetime string for completion datetime after filter
* `completedAtBefore` (string, optional): ISO 8601 datetime string for completion datetime before filter
* `sortBy` (string, optional): Field to sort by (e.g., 'due\_date', 'created\_at', 'completed\_at', 'likes', 'modified\_at'). Defaults to modified\_at
* `sortAscending` (boolean, optional): Sort in ascending order. Defaults to false
* `limit` (number, optional): Number of results to return (1-100)
* `optFields` (string, optional): Comma-separated fields to include
* `optPretty` (boolean, optional): Provides "pretty" output with line breaking and indentation
**Example Usage:**
> "Search for all tasks mentioning 'budget review' in the Finance workspace" or "Find all incomplete tasks assigned to me that are due this week"
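Since most of these filters accept comma-separated strings, it can help to assemble them from native lists and drop unset values before calling the action; a minimal sketch (the helper is our own illustration, not part of the integration):

```python
def build_search_params(**filters):
    """Assemble search parameters: join lists into the comma-separated
    strings these filters expect, and drop unset (None) values."""
    params = {}
    for key, value in filters.items():
        if value is None:
            continue
        if isinstance(value, (list, tuple)):
            value = ",".join(str(v) for v in value)
        params[key] = value
    return params

params = build_search_params(
    workspace="12345",
    text="budget review",
    completed=False,
    assigneeAny=["me"],
    projectsAny=None,  # unset filters are omitted entirely
)
```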
***
#### Get Task
##### `asana.get_task`
Get full task details by ID.
**Requires Confirmation:** Yes
**Parameters:**
* `taskId` (string, required): Task GID
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Get full details for task 12345"
***
#### Create Task
##### `asana.create_task`
Creates a task in Asana, optionally within a workspace, project, or parent task.
**Requires Confirmation:** Yes
**Parameters:**
* `name` (string, required): Name of the task
* `workspace` (string, optional): Workspace GID
* `projectId` (string, optional): Project GID
* `parent` (string, optional): Parent task GID
* `assignee` (string, optional): User identifier ('me', email, or user GID)
* `followers` (string, optional): Comma-separated list of user identifiers
* `notes` (string, optional): Task description
* `htmlNotes` (string, optional): HTML-formatted description
* `completed` (boolean, optional): Mark as completed
* `dueOn` (string, optional): Due date (YYYY-MM-DD)
* `dueAt` (string, optional): Due date and time
* `startOn` (string, optional): Start date (YYYY-MM-DD)
* `startAt` (string, optional): Start date and time
* `assigneeSection` (string, optional): Section GID
* `resourceSubtype` (string, optional): Task type
* `approvalStatus` (string, optional): Approval status
* `customType` (string, optional): Custom type GID
* `customTypeStatusOption` (string, optional): Custom type status option GID
* `customFields` (string, optional): JSON string of custom fields
**Example Usage:**
> "Create a task 'Review Q4 budget' due on 2024-03-15 and assign it to [john@company.com](mailto:john@company.com)"
***
#### Update Task
##### `asana.update_task`
Update existing task properties.
**Requires Confirmation:** Yes
**Parameters:**
* `taskId` (string, required): Task GID
* `name` (string, optional): Task name
* `notes` (string, optional): Task description
* `assignee` (string, optional): User identifier ('me', email, or user GID)
* `dueOn` (string, optional): Due date (YYYY-MM-DD)
* `dueAt` (string, optional): Due date and time
* `completed` (boolean, optional): Mark as completed
* `customFields` (string, optional): JSON string of custom fields
**Example Usage:**
> "Mark task 123456 as completed and update the notes"
***
#### Set Parent for Task
##### `asana.set_parent_for_task`
Change task parent (convert to/from subtask).
**Requires Confirmation:** Yes
**Parameters:**
* `taskId` (string, required): Task GID
* `parent` (string, required): Parent task GID
* `insertBefore` (string, optional): Insert before task GID
* `insertAfter` (string, optional): Insert after task GID
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Make task 12345 a subtask of task 67890"
***
#### Set Task Dependencies
##### `asana.set_task_dependencies`
Set tasks this task depends on (prerequisites).
**Requires Confirmation:** Yes
**Parameters:**
* `taskId` (string, required): Task GID
* `dependencies` (object, required): Array of task GIDs
**Example Usage:**
> "Set task 12345 to depend on tasks 67890 and 11111"
***
#### Set Task Dependents
##### `asana.set_task_dependents`
Set tasks blocked by this task (tasks waiting on this one).
**Requires Confirmation:** Yes
**Parameters:**
* `taskId` (string, required): Task GID
* `dependents` (object, required): Array of task GIDs
**Example Usage:**
> "Set tasks 67890 and 11111 to wait on task 12345"
***
#### Add Task Followers
##### `asana.add_task_followers`
Add followers to task (team members to notify of updates).
**Requires Confirmation:** Yes
**Parameters:**
* `taskId` (string, required): Task GID
* `followers` (string, required): Comma-separated list of user identifiers
**Example Usage:**
> "Add [john@company.com](mailto:john@company.com) and [jane@company.com](mailto:jane@company.com) as followers to task 12345"
***
#### Remove Task Followers
##### `asana.remove_task_followers`
Remove followers from task (stop notification subscriptions).
**Requires Confirmation:** Yes
**Parameters:**
* `taskId` (string, required): Task GID
* `followers` (string, required): Comma-separated list of user identifiers
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Remove [john@company.com](mailto:john@company.com) from following task 12345"
***
### Stories & Comments
#### Get Stories for Task
##### `asana.get_stories_for_task`
Get task activity history (comments, status changes, system events).
**Requires Confirmation:** Yes
**Parameters:**
* `taskId` (string, required): Task GID
* `limit` (number, optional): Number of stories to return (1-100)
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Show me all comments and activity for task 12345"
***
#### Get Story
##### `asana.get_story`
Get a single story by GID.
**Requires Confirmation:** Yes
**Parameters:**
* `storyGid` (string, required): Story GID
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Get details for story 12345"
***
#### Create Task Story
##### `asana.create_task_story`
Add explicit comment to task.
**Requires Confirmation:** Yes
**Parameters:**
* `taskId` (string, required): Task GID
* `text` (string, required): Comment text
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Add a comment to task 12345 saying 'This looks good to me'"
***
### Tags
#### Get Tags in Workspace
##### `asana.get_tags_in_workspace`
Returns compact tag records filtered by workspace.
**Requires Confirmation:** Yes
**Parameters:**
* `workspaceGid` (string, required): Workspace GID
* `optFields` (string, optional): Comma-separated fields to include
* `limit` (number, optional): Maximum number of tags to return
**Example Usage:**
> "Show all tags in the Marketing workspace"
***
### Attachments
#### Get Attachments for Object
##### `asana.get_attachments_for_object`
List attachment IDs and metadata for a project, task, or project brief (no download).
**Requires Confirmation:** Yes
**Parameters:**
* `parent` (string, required): Project or task GID
* `optFields` (string, optional): Comma-separated fields to include
* `limit` (number, optional): Maximum number of attachments to return
**Example Usage:**
> "List all attachments for task 12345"
***
#### Download Attachment
##### `asana.download_attachment`
Get detailed attachment data including name, resource type, download\_url, view\_url, and parent.
**Requires Confirmation:** Yes
**Parameters:**
* `attachmentGid` (string, required): Attachment GID
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Get download URL for attachment 12345"
***
#### Download Attachments for Object
##### `asana.download_attachments_for_object`
Download all attachments for a project, task, or project brief.
**Requires Confirmation:** Yes
**Parameters:**
* `parent` (string, required): Project or task GID
* `optFields` (string, optional): Comma-separated fields to include
* `limit` (number, optional): Maximum number of attachments to return
**Example Usage:**
> "Download all attachments for project 12345"
***
### Portfolios
#### Get Portfolios
##### `asana.get_portfolios`
List portfolios filtered by workspace, owner, or team.
**Requires Confirmation:** Yes
**Parameters:**
* `owner` (string, required): User identifier ('me' or user GID)
* `workspace` (string, optional): Workspace GID
* `team` (string, optional): Team GID
* `optFields` (string, optional): Comma-separated fields to include
* `limit` (number, optional): Maximum number of portfolios to return
**Example Usage:**
> "Show me all portfolios I own in workspace 12345"
***
#### Get Portfolio
##### `asana.get_portfolio`
Get detailed portfolio data by ID.
**Requires Confirmation:** Yes
**Parameters:**
* `portfolioGid` (string, required): Portfolio GID
* `optFields` (string, optional): Comma-separated fields to include
**Example Usage:**
> "Get full details for portfolio 12345"
***
#### Get Portfolio Items
##### `asana.get_portfolio_items`
Get the items in a portfolio in compact form.
**Requires Confirmation:** Yes
**Parameters:**
* `portfolioGid` (string, required): Portfolio GID
* `optFields` (string, optional): Comma-separated fields to include
* `limit` (number, optional): Maximum number of items to return
**Example Usage:**
> "Show me all items in portfolio 12345"
***
### Search
#### Typeahead Search
##### `asana.typeahead_search`
Quick search across Asana objects.
**Requires Confirmation:** Yes
**Parameters:**
* `workspaceGid` (string, required): Workspace GID
* `resourceType` (string, required): Resource type to search
* `query` (string, optional): Search query
* `count` (number, optional): Number of results to return (1-100)
**Example Usage:**
> "Quick search for projects matching 'marketing' in workspace 12345"
***
## Common Use Cases
* Create and organize projects
* Track project progress
* Manage project timelines
* Create and assign tasks
* Update task status
* Set due dates and priorities
* Organize teams and workspaces
* Assign work to team members
* Track team productivity
* Search and filter tasks
* Bulk update task properties
* Generate status reports
## Best Practices
**Performance Tips:**
* Use `limit` parameter to control result size
* Utilize `optFields` to reduce data transfer
* Cache workspace and project GIDs for repeated use
* Use search for complex filtering needs
**Important Considerations:**
* GIDs are permanent unique identifiers
* Due dates must be in YYYY-MM-DD format
* Completed tasks remain in the system
* Rate limits apply per OAuth token
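Because due dates must be in YYYY-MM-DD format, validating them client-side avoids a failed request; a minimal sketch (the helper name is our own):

```python
from datetime import datetime

def is_valid_due_date(value):
    """Check that a date string matches the YYYY-MM-DD format
    Asana expects for dueOn and startOn."""
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False
```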
## Workflow Examples
### Example 1: Daily Task Management
```
1. List workspaces to identify your main workspace
2. Get tasks assigned to you
3. Update task status as you work
4. Mark tasks as completed
5. Create new tasks for tomorrow
```
### Example 2: Project Setup
```
1. Create a new project in the appropriate team
2. Add initial tasks to the project
3. Assign tasks to team members
4. Set due dates for milestones
5. Add relevant tags for organization
```
## Troubleshooting
| Issue | Solution |
| --------------------- | ------------------------------------------ |
| "Workspace not found" | Verify workspace GID using List Workspaces |
| "Invalid date format" | Use YYYY-MM-DD format for dates |
| "Permission denied" | Check OAuth token permissions |
| "Task not found" | Ensure task GID is correct and accessible |
## Related Integrations
* [Jira](/administration/integrations/jira) - For software development tracking
* [Linear](/administration/integrations/linear) - For modern issue tracking
* [Monday.com](/administration/integrations/monday) - For visual work management
## Support
For additional help with the Asana integration, contact [support@langdock.com](mailto:support@langdock.com)
# AWS Kendra
Source: https://docs.langdock.com/administration/integrations/aws-kendra
Intelligent enterprise search service powered by machine learning
## Overview
AWS Kendra is an intelligent enterprise search service powered by machine learning. Through Langdock's integration, you can query your Kendra indexes and inspect their configuration directly from your conversations.
**Authentication:** API Key (AWS Credentials)\
**Category:** AI & Search\
**Availability:** All workspace plans
## Available Actions
### Search
##### `awskendra.search`
Searches your Kendra index using natural language queries.
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Required): Natural language query to search for in your Kendra index. For example: 'How do I reset my password?' or 'sales report Q3 2024'
* `pageSize` (NUMBER, Optional): Number of results to return per page. Maximum is 100. If not specified, defaults to 10
* `attributeFilter(see example below){"EqualsTo": {"Key": "field", "Value": "text"}}`
* `queryResultType` (SELECT, Optional): Filter results by type. Options: All results, Documents only, Answers only, Questions and answers only
* `pageNumber` (NUMBER, Optional): Page number to retrieve (1-10 for page size 10, 1-2 for page size 50). Note: AWS Kendra limits total retrievable results to 100
* `facets` (MULTI\_LINE\_TEXT, Optional): JSON array of document attribute names to get facet counts. Simple format: ``. Advanced format with max results: (see example below)
* `sortingConfiguration` (MULTI\_LINE\_TEXT, Optional): JSON object to sort results. Format: (see example below). IMPORTANT: Use 'Get Index Configuration' first to verify which fields are sortable. Only fields marked as sortable in your index will work
* `spellCorrection` (SELECT, Optional): Enable automatic spell correction for queries to improve search accuracy. Options: Enabled, Disabled
* `userContext` (MULTI\_LINE\_TEXT, Optional): JSON object for user-specific filtering. GenAI format: `JSON: email_id = user@example.com`. Standard format: (JSON format)
* `visitorId` (TEXT, Optional): Unique identifier for tracking user sessions (e.g., a GUID). Do not use personally identifiable information like email
* `requestedDocumentAttributes` (MULTI\_LINE\_TEXT, Optional): JSON array of document attribute names to include in the response (max 100). Reduces response size by limiting fields
* `collapseConfiguration` (MULTI\_LINE\_TEXT, Optional): JSON object to group/collapse similar results. Basic: (see example below). With expansion: (see example below)
* `documentRelevanceOverrides` (MULTI\_LINE\_TEXT, Optional): JSON array to boost specific fields/values. Format: (see example below)
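The JSON-valued parameters above are passed as strings. A minimal sketch in Python of assembling them, using the simplified `attributeFilter` format shown above; the attribute names (`department`, `_created_at`) are illustrative, so run Get Index Configuration first to see the fields your index actually supports:

```python
import json

# Filter to documents whose "department" attribute equals "Engineering",
# following the simplified {"EqualsTo": ...} format shown above.
attribute_filter = {"EqualsTo": {"Key": "department", "Value": "Engineering"}}

# Sort by a date field that Get Index Configuration reports as sortable.
sorting_configuration = {"DocumentAttributeKey": "_created_at",
                         "SortOrder": "DESC"}

# JSON parameters are serialized to strings before being passed to the action.
params = {
    "query": "How do I reset my password?",
    "attributeFilter": json.dumps(attribute_filter),
    "sortingConfiguration": json.dumps(sorting_configuration),
}
```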
**Output:** Returns search results with the following structure:
* `totalResults`: Total number of results found
* `results`: Array of result objects containing:
* `id`: Document ID
* `title`: Document title
* `excerpt`: Document excerpt
* `uri`: Document URI
* `score`: Relevance score
* `attributes`: Document attributes (if available)
* `facets`: Facet results if requested
* `spellSuggestions`: Spell correction suggestions if available
* `featuredResults`: Featured results if available
***
### Get Index Configuration
##### `awskendra.getIndexConfiguration`
Returns field configuration for your Kendra index. Shows field names, types (STRING, DATE, LONG), and properties (searchable, sortable, facetable). Run this BEFORE searching to know which fields you can use for filtering and sorting.
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns index configuration with the following structure:
* `indexName`: Name of the index
* `status`: Index status
* `edition`: Index edition
* `fields`: Array of field objects containing:
* `name`: Field name
* `type`: Field type (STRING, DATE, LONG, etc.)
* `searchable`: Whether field is searchable
* `sortable`: Whether field is sortable
* `facetable`: Whether field is facetable
* `displayable`: Whether field is displayable
* `importance`: Field importance score
* `summary`: Summary statistics including:
* `totalFields`: Total number of fields
* `sortableFields`: List of sortable field names
* `searchableFields`: List of searchable field names
* `dateFields`: List of date field names
* `sortableDateFields`: List of sortable date field names
***
## Common Use Cases
* Manage and organize your AWS Kendra data
* Automate workflows with AWS Kendra
* Generate insights and reports
* Connect AWS Kendra with other tools
## Best Practices
**Getting Started:**
1. Enable the AWS Kendra integration in your workspace settings
2. Authenticate using API Key (AWS Credentials)
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | ------------------------------------------------- |
| Authentication failed | Verify your API Key (AWS Credentials) credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the AWS Kendra integration, contact [support@langdock.com](mailto:support@langdock.com)
# Azure AI Search
Source: https://docs.langdock.com/administration/integrations/azure-search
AI-powered information retrieval platform by Microsoft Azure
## Overview
Azure AI Search is Microsoft's AI-powered information retrieval platform. Through Langdock's integration, you can perform semantic vector searches across your indexed documents directly from conversations.
**Authentication:** API Key\
**Category:** AI & Search\
**Availability:** All workspace plans
***
## Prerequisites
Before setting up the integration, make sure you have:
* An Azure subscription with access to Azure AI Search
* An Azure AI Search service instance with at least one index
* An admin API key for your Azure AI Search service
* Documents uploaded to your index with vector embeddings (1536 dimensions for OpenAI's text-embedding-ada-002)
**Pro tip:** If you're new to Azure AI Search, check out Microsoft's [Vector Search documentation](https://learn.microsoft.com/en-us/azure/search/vector-search-overview) to set up your first index with vector search support.
***
## Setup
In Langdock, go to [Integrations](https://app.langdock.com/integrations) and find [**Azure AI Search**](https://app.langdock.com/integrations/81088af0-d6e2-4ed0-a76b-633670df7840) in the integrations list.
Fill in the required configuration fields (see [table](#configuration-parameters) below).
Save the integration - Langdock will validate that your index exists and is accessible.
Tag the integration with `@` in any chat or add the `Search documents` action to your assistant to search your indexed documents.
### Configuration Parameters
#### Required Fields
| Field | Description | Example |
| ---------------- | -------------------------------------------- | --------------------------------------- |
| **Name** | A name for this connection | `Company Knowledge Base` |
| **API Key** | Admin key from Azure Portal -> `Keys` | Your admin key |
| **Index Name** | The exact name of your Azure AI Search index | `langdock-prod-company` |
| **URL** | Your Azure AI Search service endpoint | `https://my-service.search.windows.net` |
| **Search Field** | The vector field name in your index schema | `contentVector` |
| **Top K** | Number of search results to retrieve | `5` |
#### Optional Fields
| Field | Description | Default |
| ----------------------- | ----------------------------------------- | ---------- |
| **Embedding Dimension** | Dimension of your vector embeddings | `1536` |
| **Embedding Model** | Model used for embeddings (display only) | Ada v2 |
| **Select** | Comma-separated fields to return | All fields |
| **Filter** | OData filter expression to narrow results | None |
**Where to find your credentials:**
* **Service URL:** Azure Portal -> Your Search service -> Overview -> copy the `Url` field
* **API Key:** Azure Portal -> Your Search service -> Keys -> copy an admin key
***
## Available Actions
### Search Documents
##### `azureaisearch.searchDocuments`
Performs semantic vector search across your indexed documents.
**Requires Confirmation:** No
**Parameters:**
* `query` (VECTOR, Required): Vector query for semantic search
**Output:** Returns search results with the following structure:
* `value`: Array of search result objects containing:
* `@search.score`: Relevance score
* `@search.highlights`: Highlighted text snippets
* Field values from the indexed documents
* `@odata.count`: Total number of results
* `@odata.nextLink`: Link to next page of results (if available)
**Generating embeddings:** You can use the [Langdock Embedding API](/api-endpoints/embedding/openai-embedding) to generate the vector embeddings needed for your Azure AI Search index.
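Under the hood, a vector query against Azure AI Search takes the shape below. This is a sketch of the REST request body based on Azure's documented vector query format, not a guarantee about Langdock's internal request; the `contentVector` field and `k` value come from the configuration table above, and the embedding values are placeholders:

```python
import json

# Placeholder embedding -- in practice, generate a 1536-dimension vector
# with the Langdock Embedding API (see note above) or your own model.
embedding = [0.0] * 1536

# Request body for POST {service-url}/indexes/{index}/docs/search,
# following Azure AI Search's vector query shape.
body = {
    "vectorQueries": [
        {
            "kind": "vector",
            "vector": embedding,
            "fields": "contentVector",  # the configured Search Field
            "k": 5,                     # the configured Top K
        }
    ],
    "select": "title,content",          # optional: limit returned fields
}
payload = json.dumps(body)
```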
***
## Common Use Cases
* Search across internal documentation, policies, and knowledge bases using natural language
* Find relevant research papers, reports, and data from large document collections
* Quickly retrieve product information, FAQs, and support articles to answer customer queries
* Surface relevant content from archives, wikis, or document repositories
***
## Troubleshooting
| Issue | Cause | Solution |
| ------------------------- | ------------------------------------ | ---------------------------------------------------------------------------------------- |
| **Index not found** | Index name mismatch or doesn't exist | Verify the exact index name in Azure Portal matches your configuration (case-sensitive) |
| **No search results** | No documents or invalid embeddings | Confirm documents are uploaded with valid 1536-dimension embeddings in your vector field |
| **Low search scores** | Embedding model mismatch | Ensure all documents use the same embedding model (e.g., text-embedding-ada-002) |
| **Authentication failed** | Invalid or expired API key | Copy a fresh Admin Key from Azure Portal -> Keys |
**Validation checklist**
* Service URL format: `https://[service-name].search.windows.net`
* Index name matches exactly (case-sensitive)
* Search field matches your vector field name (e.g., `contentVector`)
* Documents contain valid vector embeddings
***
## Support
For additional help with the Azure AI Search integration, contact [support@langdock.com](mailto:support@langdock.com).
# BigQuery
Source: https://docs.langdock.com/administration/integrations/bigquery
Google Cloud BigQuery data warehouse for analytics and machine learning
## Overview
BigQuery is Google Cloud's data warehouse for analytics and machine learning. Through Langdock's integration, you can query and manage BigQuery directly from your conversations.
**Authentication:** OAuth\
**Category:** Data & Analytics\
**Availability:** All workspace plans
## Available Actions
### List Datasets
##### `bigquery.listDatasets`
Lists all datasets in a BigQuery project
**Requires Confirmation:** No
**Parameters:**
* `projectId` (TEXT, Required): The Google Cloud project ID containing the datasets
**Output:** Returns an array of datasets with their IDs, names, and metadata
***
### List Tables
##### `bigquery.listTables`
Lists all tables in a BigQuery dataset
**Requires Confirmation:** No
**Parameters:**
* `projectId` (TEXT, Required): The Google Cloud project ID
* `datasetId` (TEXT, Required): The dataset ID containing the tables
**Output:** Returns an array of tables with their IDs, names, and metadata
***
### Get Table Schema
##### `bigquery.getTableSchema`
Gets the schema information for a specific BigQuery table
**Requires Confirmation:** No
**Parameters:**
* `projectId` (TEXT, Required): The Google Cloud project ID
* `datasetId` (TEXT, Required): The dataset ID containing the table
* `tableId` (TEXT, Required): The table ID to get schema information for
**Output:** Returns the table schema including field names, types, and constraints
***
### Execute Query
##### `bigquery.executeQuery`
Executes a SQL query in BigQuery and returns the results
**Requires Confirmation:** No
**Parameters:**
* `projectId` (TEXT, Required): The Google Cloud project ID to execute the query in
* `query` (MULTI\_LINE\_TEXT, Required): The SQL query to execute in BigQuery
* `useLegacySql` (BOOLEAN, Optional): Whether to use legacy SQL syntax (default: false for Standard SQL)
**Output:** Returns query results with the following structure:
* `jobReference`: Job reference information
* `totalRows`: Total number of rows in the result
* `rows`: Array of result rows containing field values
* `schema`: Schema of the result fields
* `jobComplete`: Whether the job completed successfully
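Each entry in `rows` pairs positionally with the fields in `schema`. If the results follow BigQuery's REST row encoding, where each row is an `f` list of `{"v": value}` cells (an assumption here; inspect the actual action output), they can be flattened into plain dictionaries like this:

```python
def flatten_rows(schema, rows):
    """Pair each row's cell values with the schema's field names."""
    names = [field["name"] for field in schema["fields"]]
    return [
        {name: cell["v"] for name, cell in zip(names, row["f"])}
        for row in rows
    ]

# Sample data in BigQuery's f/v encoding (values arrive as strings).
schema = {"fields": [{"name": "id", "type": "INTEGER"},
                     {"name": "name", "type": "STRING"}]}
rows = [{"f": [{"v": "1"}, {"v": "Alice"}]},
        {"f": [{"v": "2"}, {"v": "Bob"}]}]

print(flatten_rows(schema, rows))
# [{'id': '1', 'name': 'Alice'}, {'id': '2', 'name': 'Bob'}]
```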
***
### Get Table Data
##### `bigquery.getTableData`
Retrieves actual data rows from a BigQuery table
**Requires Confirmation:** No
**Parameters:**
* `projectId` (TEXT, Required): The Google Cloud project ID
* `datasetId` (TEXT, Required): The dataset ID containing the table
* `tableId` (TEXT, Required): The table ID to retrieve data from
* `maxResults` (NUMBER, Optional): Maximum number of rows to return (optional)
**Output:** Returns table data with rows and schema information
***
### Create Dataset
##### `bigquery.createDataset`
Creates a new dataset in BigQuery
**Requires Confirmation:** No
**Parameters:**
* `projectId` (TEXT, Required): The Google Cloud project ID
* `datasetId` (TEXT, Required): The ID for the new dataset
* `description` (TEXT, Optional): Optional description for the dataset
* `location` (TEXT, Optional): Geographic location for the dataset (e.g., US, EU)
**Output:** Returns the created dataset with its ID and metadata
***
### Create Table
##### `bigquery.createTable`
Creates a new table in a BigQuery dataset
**Requires Confirmation:** No
**Parameters:**
* `projectId` (TEXT, Required): The Google Cloud project ID
* `datasetId` (TEXT, Required): The dataset ID to create the table in
* `tableId` (TEXT, Required): The ID for the new table
* `description` (TEXT, Optional): Optional description for the table
* `schema` (MULTI\_LINE\_TEXT, Optional): Table schema as JSON array of field objects (optional)
**Output:** Returns the created table with its ID and schema information
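The `schema` parameter takes a JSON array of field objects. A minimal sketch using BigQuery's standard field-definition shape (`name`, `type`, plus optional `mode` and `description`); the column names are illustrative:

```python
import json

schema = [
    {"name": "id", "type": "INTEGER", "mode": "REQUIRED"},
    {"name": "email", "type": "STRING", "mode": "NULLABLE"},
    {"name": "created_at", "type": "TIMESTAMP",
     "description": "Row creation time"},
]

# Pass this string as the `schema` parameter of Create Table.
schema_param = json.dumps(schema)
```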
***
### Insert Table Data
##### `bigquery.insertTableData`
Inserts data rows into a BigQuery table
**Requires Confirmation:** No
**Parameters:**
* `projectId` (TEXT, Required): The Google Cloud project ID
* `datasetId` (TEXT, Required): The dataset ID containing the table
* `tableId` (TEXT, Required): The table ID to insert data into
* `rows` (MULTI\_LINE\_TEXT, Required): JSON array of row objects to insert
* `ignoreUnknownValues` (BOOLEAN, Optional): Whether to ignore unknown values in the data
* `skipInvalidRows` (BOOLEAN, Optional): Whether to skip rows that contain invalid data
**Output:** Returns insertion results with success/failure information
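The `rows` parameter is likewise a JSON array with one object per row, keyed by column name. A short sketch with illustrative columns:

```python
import json

rows = [
    {"id": 1, "email": "alice@example.com"},
    {"id": 2, "email": "bob@example.com"},
]

# Pass as the `rows` parameter; enable `skipInvalidRows` if some rows
# may fail validation and you want the rest inserted anyway.
rows_param = json.dumps(rows)
```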
***
### Get Dataset Info
##### `bigquery.getDatasetInfo`
Gets detailed information about a BigQuery dataset
**Requires Confirmation:** No
**Parameters:**
* `projectId` (TEXT, Required): The Google Cloud project ID
* `datasetId` (TEXT, Required): The dataset ID to get information for
**Output:** Returns dataset information including creation time, location, and access controls
***
## Common Use Cases
* Manage and organize your BigQuery data
* Automate workflows with BigQuery
* Generate insights and reports
* Connect BigQuery with other tools
## Best Practices
**Getting Started:**
1. Enable the BigQuery integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the BigQuery integration, contact [support@langdock.com](mailto:support@langdock.com)
# Confluence
Source: https://docs.langdock.com/administration/integrations/confluence
Collaborative workspace that connects teams with the content they need
## Overview
Confluence is a collaborative workspace that connects teams with the content they need. Through Langdock's integration, you can access and manage Confluence pages and spaces directly from your conversations.
**Authentication:** OAuth\
**Category:** Productivity & Collaboration\
**Availability:** All workspace plans
## Available Actions
### Search
##### `confluence.search`
Searches for pages by content and title matching using text queries. Use this for partial title matches, keyword searches, or content searches.
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Optional): Text to search for in page titles and content. Searches both titles (with wildcard matching for partial matches) and full page content. Use for any user search request including keywords, phrases, or partial titles. Leave empty to get recent pages.
**Output:** Returns an array of search results with page information including:
* `id`: Page ID
* `title`: Page title
* `excerpt`: Page excerpt with highlights
* `url`: Page URL
* `space`: Space information
* `version`: Page version details
* `lastModified`: Last modified date
***
### Search (Native)
##### `confluence.searchNative`
Searches for pages and content within your Confluence workspace
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Optional): A query string for filtering the file results. If no query string is passed, it returns the most recent pages. This searches through the full text and the titles of the pages
**Output:** Returns search results optimized for file search functionality
***
### Download File
##### `confluence.downloadFile`
Downloads a specific file from your Confluence workspace
**Requires Confirmation:** No
**Parameters:**
* `itemId` (TEXT, Required): The unique identifier of the file you want to download from Confluence
**Output:** Returns the file content as binary data
***
### Get Pages
##### `confluence.getPages`
Gets pages using structured filters like space, status, or sorting. Use this when users want to list/browse pages from specific spaces, get recent pages, or filter by criteria (e.g., 'get all pages from Marketing space', 'show me recent pages', 'list draft pages').
**Requires Confirmation:** No
**Parameters:**
* `spaceId` (TEXT, Optional): Filter pages by specific space ID. If the user requests pages from a space and only provides the space name, first call the get\_spaces action to look up the space and retrieve its ID by name or key, then use that ID here.
* `status` (TEXT, Optional): Filter by page status. Use 'current' for published pages, 'draft' for draft pages. Default is 'current' if not specified.
* `sort` (TEXT, Optional): Sort order for results. Use 'created-date' for newest first, 'modified-date' for recently updated, 'title' for alphabetical. Use when user asks for recent, latest, or sorted results.
* `limit` (NUMBER, Optional): Number of results to return (1-250). Use when user specifies how many pages they want or for pagination. Default is 25.
* `cursor` (TEXT, Optional): Pagination cursor for getting next set of results. Use when user wants to continue browsing or get more results after initial query.
* `title` (TEXT, Optional): Filter by exact page title only (no partial matching). Use when user provides the complete, exact page title.
* `pageIds` (TEXT, Optional): Get specific pages by their IDs. Use when user provides specific page IDs or when you have IDs from previous queries.
**Output:** Returns an array of pages with their details including title, content, space, and metadata
***
### Get Folder
##### `confluence.getFolder`
Retrieves a specific Confluence folder by its ID. Use this action when users ask for pages of a folder.
**Requires Confirmation:** No
**Parameters:**
* `folderId` (TEXT, Required): The unique ID of the folder whose contents you want to retrieve; it can be found using the Confluence search action
**Output:** Returns folder contents and metadata
***
### Get Spaces
##### `confluence.getSpaces`
Lists all spaces in the Confluence workspace. Use this action when the user asks for available spaces, or when you need to look up a space ID by name or key before calling other actions (e.g., when a user requests pages from a space but only provides the space name).
**Requires Confirmation:** No
**Parameters:**
* `ids` (TEXT, Optional): Filter results to spaces with these IDs. Use when user provides specific space IDs.
* `keys` (TEXT, Optional): Filter results to spaces with these keys. Use when user provides specific space keys or short names.
* `type` (TEXT, Optional): Filter by space type (global, collaboration, knowledge\_base, personal). Use when user specifies a type of space.
* `status` (TEXT, Optional): Filter by space status (current, archived). Use when user specifies only active or archived spaces.
* `labels` (TEXT, Optional): Filter by space labels. Use when user specifies labels or tags for spaces.
* `favoritedBy` (TEXT, Optional): Filter to spaces favorited by a specific user (by account ID). Use when user asks for their favorite spaces.
* `notFavoritedBy` (TEXT, Optional): Filter to spaces NOT favorited by a specific user (by account ID).
* `sort` (TEXT, Optional): Sort the result by a particular field (id, key, name, etc).
* `descriptionFormat` (TEXT, Optional): Format for the space description field (plain, view).
* `includeIcon` (BOOLEAN, Optional): Set to true to include the space icon in the response.
* `cursor` (TEXT, Optional): Pagination cursor for getting next set of results.
* `limit` (NUMBER, Optional): Maximum number of spaces to return (1-250).
**Output:** Returns an array of spaces with their details including name, key, description, and metadata
***
### Create Page
##### `confluence.createPage`
Creates a new page in a Confluence space. Use this when users want to create new content, add a page, or publish information to Confluence.
**Requires Confirmation:** Yes
**Parameters:**
* `spaceId` (TEXT, Required): ID of the space where the page will be created. Required. Use get\_spaces action to find space ID if only space name is provided.
* `title` (TEXT, Optional): Title of the page. Required for published pages, optional for drafts. Use when user provides a page title or heading.
* `bodyContent` (MULTI\_LINE\_TEXT, Optional): Content of the page in Confluence storage format or HTML. Use when user provides page content, text, or wants to add information to the page.
* `status` (TEXT, Optional): Page status: 'current' for published pages, 'draft' for draft pages. Use 'draft' when user wants to save without publishing. Default is 'current'.
* `parentId` (TEXT, Optional): ID of parent page to create this page under. Use when user wants to create a sub-page or organize content hierarchically. Leave empty for space homepage.
* `subtype` (TEXT, Optional): Page subtype. Use 'live' to create a live document that supports real-time collaboration. Leave empty for regular pages.
* `private` (BOOLEAN, Optional): Set to true to create a private page that only the creator can view and edit. Use when user wants to create personal or private content.
* `rootLevel` (BOOLEAN, Optional): Set to true to create the page at space root level (outside space homepage tree). Use when user wants a top-level page not under the homepage.
* `embedded` (BOOLEAN, Optional): Set to true to create embedded content. Use for special content types that will be embedded elsewhere.
**Output:** Returns the created page with its ID and metadata
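The `bodyContent` parameter accepts Confluence storage format, an XHTML-based markup, or plain HTML. A small sketch of assembling the parameters in Python; the space ID, title, and content are illustrative, and the tags shown are ordinary storage-format HTML rather than anything Langdock-specific:

```python
# Simple pages can be built from plain HTML tags such as <p>, <h2>, <ul>.
body_content = (
    "<h2>Release Notes</h2>"
    "<p>Summary of changes for this sprint.</p>"
    "<ul><li>Fixed login bug</li><li>Improved search speed</li></ul>"
)

params = {
    "spaceId": "12345",          # look up with Get Spaces if you only know the name
    "title": "Sprint 42 Notes",
    "bodyContent": body_content,
    "status": "draft",           # save without publishing
}
```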
***
### Update Page
##### `confluence.updatePage`
Updates an existing Confluence page by ID. Can update title only (efficient) or full content. Use titleChangeOnly=true when users only want to change the page title/heading for token efficiency. Use titleChangeOnly=false for content changes.
**Requires Confirmation:** Yes
**Parameters:**
* `pageId` (TEXT, Required): ID of the page to update. Required. Can be found using search or get\_pages actions.
* `title` (TEXT, Required): New title for the page. Required. Use when user wants to change the page title or heading.
* `titleChangeOnly` (BOOLEAN, Optional): Set to true when only changing the page title (efficient, no version number needed). Set to false when updating content. Use true for rename operations to save tokens.
* `bodyContent` (MULTI\_LINE\_TEXT, Optional): New content for the page in Confluence storage format or HTML. Required when titleChangeOnly=false. Not needed when titleChangeOnly=true.
* `versionNumber` (NUMBER, Required): Current version number of the page. REQUIRED by Confluence API when titleChangeOnly=false (for optimistic locking to prevent edit conflicts). The version will be automatically incremented for the update. If not provided, it will be fetched automatically. Not needed when titleChangeOnly=true.
* `status` (TEXT, Optional): Page status: 'current' for published, 'draft' for draft. Use when user wants to publish/unpublish or change page status. Default is 'current'.
* `versionMessage` (TEXT, Optional): Optional message describing the changes made in this update. Use when user provides context about their changes. Only used when titleChangeOnly=false.
* `parentId` (TEXT, Optional): ID of new parent page to move this page under within the same space. Use when user wants to reorganize page hierarchy. Only used when titleChangeOnly=false.
* `ownerId` (TEXT, Optional): Account ID of new page owner to transfer ownership. Use when user wants to change who owns the page. Only used when titleChangeOnly=false.
* `spaceId` (TEXT, Optional): ID of containing space. Note: Moving pages to different spaces is not currently supported by the API. Only used when titleChangeOnly=false.
**Output:** Returns the updated page with its ID and metadata
***
### Get Page
##### `confluence.getPage`
Retrieves a specific Confluence page by its ID including its content and metadata
**Requires Confirmation:** No
**Parameters:**
* `pageId` (TEXT, Required): The unique ID of the page whose content you want to retrieve; it can be found using the Confluence search action
**Output:** Returns the page content and metadata
***
#### Triggers
### New Page
##### `confluence.newPage`
Triggers when new pages are published in Confluence
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns information about newly published pages
***
### Updated Page
##### `confluence.updatedPage`
Triggers when a published Confluence page is updated
**Requires Confirmation:** No
**Parameters:**
* `pageId` (TEXT, Required): The id of the page that should be monitored for updates
**Output:** Returns information about updated pages
***
## Common Use Cases
* Manage and organize your Confluence data
* Automate workflows with Confluence
* Generate insights and reports
* Connect Confluence with other tools
## Best Practices
**Getting Started:**
1. Enable the Confluence integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Confluence integration, contact [support@langdock.com](mailto:support@langdock.com)
# DeepL
Source: https://docs.langdock.com/administration/integrations/deepl
Advanced AI translation service delivering accurate, natural-sounding translations
## Overview
DeepL is an advanced AI translation service delivering accurate, natural-sounding translations. Through Langdock's integration, you can translate and improve text directly from your conversations.
**Authentication:** API Key\
**Category:** Translation & Content\
**Availability:** All workspace plans
## Available Actions
### Translate Text
##### `deepl.translateText`
Translates text from one language to another using DeepL's translation service
**Requires Confirmation:** No
**Parameters:**
* `inputText` (MULTI\_LINE\_TEXT, Required): The text to translate
* `targetLanguage` (SELECT, Required): The Target Language. Options include: English (US), French, German, Italian, Spanish, Japanese, Dutch, Polish, Turkish, Russian, Chinese, Danish, Portuguese (Brazil), Korean, Indonesian, Hungarian, Czech, Greek, Bulgarian, Estonian, Lithuanian, Norwegian Bokmal, Latvian, Romanian, Slovak, Slovenian, Swedish, Ukrainian, Chinese (Simplified), Chinese (Traditional), Portuguese (Portugal)
* `sourceLanguage` (SELECT, Optional): The source language of the text. If not specified, DeepL will auto-detect the language. Required when using glossary id. Options include: Arabic, Bulgarian, Czech, Danish, German, Greek, English, Spanish, Estonian, Finnish, French, Hungarian, Indonesian, Italian, Japanese, Korean, Lithuanian, Latvian, Norwegian Bokmal, Dutch, Polish, Portuguese, Romanian, Russian, Slovak, Slovenian, Swedish, Turkish, Ukrainian, Chinese
* `glossaryId` (TEXT, Optional): The unique identifier of the glossary to use for translation. Requires source language to be set. The glossary language pair must match the translation language pair
* `formality` (SELECT, Optional): Sets the formality level of the translation. Available for target languages DE, FR, IT, ES, NL, PL, PT-BR, PT-PT, JA, and RU. Use prefer\_ options to fallback gracefully for unsupported languages. Options: default, more, less, prefer\_more, prefer\_less
**Output:** Returns translation results with the following structure:
* `translations`: Array of translation objects containing:
* `detected_source_language`: Detected source language code
* `text`: Translated text
* `usage`: Usage information including character count
***
### Improve Text
##### `deepl.improveText`
Improves and rephrases text in the specified target language, with an optional writing style or tone (but not both)
**Requires Confirmation:** No
**Parameters:**
* `inputText` (MULTI\_LINE\_TEXT, Required): The text to improve
* `targetLanguage` (SELECT, Optional): The Target Language. Options: French, German, Italian, Spanish, Portuguese (Brazil), Portuguese (Portugal), English, English (UK), English (US)
* `writingStyle` (SELECT, Optional): Specify a style to rephrase your text in a way that fits your audience and goals. The prefer\_ prefix allows falling back to the default tone if the language does not yet support tones. Options: academic, business, casual, default, simple, prefer\_academic, prefer\_business, prefer\_casual, prefer\_simple
* `tone` (SELECT, Optional): Specify the desired tone for your text. The prefer\_ prefix allows falling back to the default tone if the language does not yet support tones. Options: confident, default, diplomatic, enthusiastic, friendly, prefer\_confident, prefer\_diplomatic, prefer\_enthusiastic
**Output:** Returns improved text with the same structure as translation results
***
### Create Glossary
##### `deepl.createGlossary`
Creates a glossary
**Requires Confirmation:** Yes
**Parameters:**
* `name` (TEXT, Required): The name of the glossary you want to create
* `sourceLanguage` (TEXT, Required): The language in which the source texts in the glossary are specified. Example: en. Available options: da de en es fr it ja ko nb nl pl pt ro ru sv zh
* `targetLanguage` (TEXT, Required): The language in which the target texts in the glossary are specified. Example: en. Available options: da de en es fr it ja ko nb nl pl pt ro ru sv zh
* `entries` (TEXT, Required): The entries of the glossary. The entries have to be specified in the format provided by the entries\_format parameter. Example: 'Hello Guten Tag'
* `entriesFormat` (TEXT, Required): The format in which the glossary entries are provided. Formats currently available: 1. tsv (default) - tab-separated values, 2. csv - comma-separated values
**Output:** Returns the created glossary with its ID and metadata
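In the default `tsv` format, `entries` is a list of tab-separated source/target pairs, one per line. A sketch of the full parameter set, assuming an English-to-German business glossary (the terms are illustrative):

```python
# One entry per line; source and target are separated by a tab character.
entries = "\n".join([
    "Hello\tGuten Tag",
    "invoice\tRechnung",
    "deadline\tFrist",
])

params = {
    "name": "EN-DE Business Terms",
    "sourceLanguage": "en",
    "targetLanguage": "de",
    "entries": entries,
    "entriesFormat": "tsv",
}
```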
***
### List All Glossaries
##### `deepl.listAllGlossaries`
Lists all available glossaries
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns an array of glossaries with their IDs, names, and language pairs
***
### Delete Glossary
##### `deepl.deleteGlossary`
Deletes a glossary specified by its ID
**Requires Confirmation:** Yes
**Parameters:**
* `glossaryId` (TEXT, Required): The unique identifier of the glossary you want to delete
**Output:** Returns confirmation of deletion
***
### Get Glossary
##### `deepl.getGlossary`
Retrieves metadata or the entries of a single glossary specified by its ID
**Requires Confirmation:** No
**Parameters:**
* `glossaryId` (TEXT, Required): The unique identifier of the glossary you want to retrieve information about
* `entries` (BOOLEAN, Optional): If enabled, retrieves the glossary's entries/content
**Output:** Returns glossary information including name, language pair, and optionally entries
***
### Get Usage
##### `deepl.getUsage`
Retrieves current usage information for your DeepL API account including character counts and limits
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns usage information including character count and limits
***
## Common Use Cases
* Manage and organize your DeepL data
* Automate workflows with DeepL
* Generate insights and reports
* Connect DeepL with other tools
## Best Practices
**Getting Started:**
1. Enable the DeepL integration in your workspace settings
2. Authenticate using API Key
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your API Key credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the DeepL integration, contact [support@langdock.com](mailto:support@langdock.com)
# ElevenLabs
Source: https://docs.langdock.com/administration/integrations/elevenlabs
AI-powered text-to-speech and speech-to-text platform with natural voice synthesis and accurate transcription capabilities
## Overview
AI-powered text-to-speech and speech-to-text platform with natural voice synthesis and accurate transcription capabilities. Through Langdock's integration, you can access and manage ElevenLabs directly from your conversations.
**Authentication:** API Key\
**Category:** Translation & Content\
**Availability:** All workspace plans
## Available Actions
### Text to Speech
##### `elevenlabs.texttoSpeech`
Convert text into natural-sounding speech using AI voices
**Requires Confirmation:** Yes
**Parameters:**
* `text` (MULTI\_LINE\_TEXT, Required): The text to convert to speech
* `voice_id` (TEXT, Optional): The ID of the voice to use. Default is 'Rachel' (21m00Tcm4TlvDq8ikWAM)
* `model_id` (SELECT, Optional): The model to use for text-to-speech. Options: Multilingual v2 (Default), Turbo v2.5 (Fast), Flash v2.5 (Very Fast)
* `output_format` (SELECT, Optional): The audio format for the output. Options: MP3 (44.1kHz), MP3 (22kHz), PCM (16kHz), PCM (22kHz), PCM (24kHz), PCM (44.1kHz)
**Output:** Returns audio file with the following structure:
* `fileName`: Generated audio file name
* `mimeType`: MIME type of the audio file
* `base64`: Base64 encoded audio data
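A minimal sketch of handling this output, assuming a `result` dict that mirrors the documented structure (the audio bytes here are placeholder data):

```python
import base64

# Hypothetical sketch: decoding and saving the base64 audio returned
# by elevenlabs.texttoSpeech. `result` mimics the documented output.
result = {
    "fileName": "speech.mp3",
    "mimeType": "audio/mpeg",
    "base64": base64.b64encode(b"\x00\x01\x02").decode("ascii"),
}

audio_bytes = base64.b64decode(result["base64"])
with open(result["fileName"], "wb") as f:
    f.write(audio_bytes)
```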
***
### Get Recent Conversations
##### `elevenlabs.getRecentConversations`
List recent Conversational AI agent conversations with filtering options
**Requires Confirmation:** No
**Parameters:**
* `limit` (NUMBER, Optional): Maximum number of conversations to retrieve (default: 10, max: 100)
* `agent_id` (TEXT, Optional): Filter by specific agent ID
* `from_date` (TEXT, Optional): Filter conversations after this date (YYYY-MM-DD format)
* `to_date` (TEXT, Optional): Filter conversations before this date (YYYY-MM-DD format)
**Output:** Returns an array of conversations with their IDs, timestamps, and metadata
***
### Get Conversation Transcript
##### `elevenlabs.getConversationTranscript`
Retrieve transcript and details from Conversational AI agent conversations
**Requires Confirmation:** No
**Parameters:**
* `conversation_id` (TEXT, Required): The conversation ID from your Conversational AI history (e.g., qcRqzgdfTDNaCznQIlSJ)
**Output:** Returns conversation transcript and details including messages, timestamps, and participant information
***
## Common Use Cases
* Manage and organize your ElevenLabs data
* Automate workflows with ElevenLabs
* Generate insights and reports
* Connect ElevenLabs with other tools
## Best Practices
**Getting Started:**
1. Enable the ElevenLabs integration in your workspace settings
2. Authenticate using API Key
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your API Key credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the ElevenLabs integration, contact [support@langdock.com](mailto:support@langdock.com)
# Excel
Source: https://docs.langdock.com/administration/integrations/excel
Spreadsheet software for data organization, analysis, and visualization
## Overview
Spreadsheet software for data organization, analysis, and visualization. Through Langdock's integration, you can access and manage Excel directly from your conversations.
**Authentication:** OAuth\
**Category:** Microsoft 365\
**Availability:** All workspace plans
## Available Actions
### Get Item by Name
##### `excel.getItembyName`
Retrieves OneDrive items by name, providing their ID, name, URL, and other metadata
**Requires Confirmation:** No
**Parameters:**
* `filter` (TEXT, Required): Filter drive items to those whose names contain this text
**Output:** Returns an array of items with the following structure:
* `id`: Item ID
* `name`: Item name
* `webUrl`: Web URL for the item
* `createdDate`: Creation date
* `creator`: Creator information (name, email)
* `lastModifiedDateTime`: Last modified date
* `lastModifier`: Last modifier information (name, email)
***
### Get Sheet by Item ID
##### `excel.getSheetbyItemID`
Retrieves all worksheets of an Excel workbook
**Requires Confirmation:** No
**Parameters:**
* `itemId` (TEXT, Required): Id of the item whose worksheets you want to retrieve
**Output:** Returns an array of worksheets with their IDs and names
***
### Add Sheet to Workbook
##### `excel.addSheettoWorkbook`
Adds a worksheet to an existing Excel workbook
**Requires Confirmation:** Yes
**Parameters:**
* `itemId` (TEXT, Required): The id of the item which contains the Excel sheet
* `sheetName` (TEXT, Required): The name of the sheet you want to add to the workbook
**Output:** Returns the created sheet with its ID and name
***
### Get Tables
##### `excel.getTables`
Retrieves all tables from a worksheet, specified by item id and sheet id
**Requires Confirmation:** No
**Parameters:**
* `itemId` (TEXT, Required): Id of the item whose tables you want to retrieve
* `sheetId` (TEXT, Required): The id of the sheet where the table is located
**Output:** Returns an array of tables with their IDs and metadata
***
### Get All Table Columns
##### `excel.getAllTableColumns`
Retrieves all columns from a table
**Requires Confirmation:** No
**Parameters:**
* `tableId` (TEXT, Required): The id of the table whose columns you want to retrieve
* `itemId` (TEXT, Required): The id of the item which contains the Excel sheet
**Output:** Returns an array of columns with their names and types
***
### Get Single Table Row
##### `excel.getSingleTableRow`
Retrieves a specific row from a table given its row index
**Requires Confirmation:** No
**Parameters:**
* `itemId` (TEXT, Required): The id of the item which contains the Excel sheet
* `tableId` (TEXT, Required): The id of the table you want to fetch the rows of
* `rowIndex` (TEXT, Required): Index (number) of the row you want to retrieve
**Output:** Returns the row data with values for each column
***
### Update Table Row
##### `excel.updateTableRow`
Updates a specific row in a table, requiring values for each column
**Requires Confirmation:** Yes
**Parameters:**
* `rowIndex` (TEXT, Required): Index (number) of the row you want to update
* `rowValues` (TEXT, Required): The values of the row, separated by commas. For example, `hello,world,4` inserts `hello` into the first column, `world` into the second, and `4` into the third. The same format applies to numbers and dates: separate every value with a comma
* `itemId` (TEXT, Required): The id of the item which contains the Excel sheet
* `tableId` (TEXT, Required): The id of the table containing the row
**Output:** Returns confirmation of the update
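The comma-separated `rowValues` format described above could be produced with a small helper like this (the helper is a hypothetical sketch, not part of the integration):

```python
# Hypothetical helper: serialize a list of cell values into the
# comma-separated rowValues string that updateTableRow and
# addTableRow expect.
def to_row_values(values):
    return ",".join(str(v) for v in values)

row_values = to_row_values(["hello", "world", 4])  # "hello,world,4"
```

Note that cell values which themselves contain commas cannot be represented in this format.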
***
### Add Table Row
##### `excel.addTableRow`
Adds a new row to the end of a table
**Requires Confirmation:** Yes
**Parameters:**
* `itemId` (TEXT, Required): The id of the item which contains the Excel sheet
* `tableId` (TEXT, Required): The id of the table you want to add the row to
* `rowValues` (TEXT, Required): The values of the row, separated by commas. For example, `hello,world,4` inserts `hello` into the first column, `world` into the second, and `4` into the third. The same format applies to numbers and dates: separate every value with a comma
**Output:** Returns confirmation of the row addition
***
### Delete Table Row
##### `excel.deleteTableRow`
Deletes a specific row from a table given its row index
**Requires Confirmation:** Yes
**Parameters:**
* `rowIndex` (NUMBER, Required): Index of the row you want to delete
* `itemId` (TEXT, Required): The id of the item which contains the Excel sheet
* `tableId` (TEXT, Required): The id of the table containing the row
**Output:** Returns confirmation of the row deletion
***
## Common Use Cases
* Manage and organize your Excel data
* Automate workflows with Excel
* Generate insights and reports
* Connect Excel with other tools
## Best Practices
**Getting Started:**
1. **Prerequisite:** A Microsoft Admin must [approve the Langdock application](/administration/microsoft-admin-approval) in your Microsoft workspace once.
2. Enable the Excel integration in your workspace settings
3. Authenticate using OAuth
4. Test the connection with a simple read operation
5. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Excel integration, contact [support@langdock.com](mailto:support@langdock.com)
# GitHub
Source: https://docs.langdock.com/administration/integrations/github
GitHub allows developers to create, store, manage, and share their code
## Overview
GitHub allows developers to create, store, manage, and share their code. Through Langdock's integration, you can access and manage GitHub directly from your conversations.
**Authentication:** OAuth\
**Category:** Development & Issue Tracking\
**Availability:** All workspace plans
## Available Actions
### List Pull Requests
##### `github.listPullRequests`
Lists all pull requests in a repository
**Requires Confirmation:** No
**Parameters:**
* `owner` (TEXT, Required): Owner of the repository
* `repository` (TEXT, Required): Repository name
**Output:** Returns an array of pull requests with the following structure:
* `id`: Pull request ID
* `number`: Pull request number
* `title`: Pull request title
* `body`: Pull request description
* `state`: Pull request state (open, closed, merged)
* `created_at`: Creation date
* `updated_at`: Last update date
* `user`: Author information
* `head`: Source branch information
* `base`: Target branch information
***
### Get Pull Request
##### `github.getPullRequest`
Retrieves detailed information about the specified pull request
**Requires Confirmation:** No
**Parameters:**
* `owner` (TEXT, Required): Owner of the repository you want to get the pull request's details from
* `repository` (TEXT, Required): The repository to look into
* `pullRequestNumber` (TEXT, Required): The number of the pull request you are interested in
**Output:** Returns detailed pull request information including commits, files changed, and review status
***
### Get Pull Request Commits
##### `github.getPullRequestCommits`
Gets the commits of a given pull request
**Requires Confirmation:** No
**Parameters:**
* `owner` (TEXT, Required): Owner of the repository you want to get the pull request's commits from
* `repository` (TEXT, Required): The repository you want to retrieve the pull request's commits from
* `pullRequestNumber` (TEXT, Required): The number of the pull request
**Output:** Returns an array of commits with their details including SHA, message, author, and date
***
### Create Pull Request
##### `github.createPullRequest`
Creates a pull request
**Requires Confirmation:** Yes
**Parameters:**
* `owner` (TEXT, Required): The owner of the GitHub repository you want to create a pull request for
* `repository` (TEXT, Required): The name of the GitHub repository you want to create a pull request for
* `title` (TEXT, Required): The title of the pull request
* `body` (MULTI\_LINE\_TEXT, Required): The body / description of the pull request
* `targetBranch` (TEXT, Required): The name of the branch you want to merge the changes into
* `githubUsername` (TEXT, Required): GitHub username
* `sourceBranch` (TEXT, Required): Source Branch name
**Output:** Returns the created pull request with its number and details
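Putting the parameters together, a call might look like the following sketch (the owner, repository, branch names, and username are illustrative placeholders, not real values):

```python
# Illustrative parameter set for github.createPullRequest.
pr_params = {
    "owner": "acme",                       # placeholder owner
    "repository": "widgets",               # placeholder repository
    "title": "Fix null check in parser",
    "body": "Handles empty input without raising.",
    "sourceBranch": "fix/null-check",      # branch with the changes
    "targetBranch": "main",                # branch to merge into
    "githubUsername": "octocat",           # placeholder username
}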
***
### List Issues
##### `github.listIssues`
Lists all issues in a given repository
**Requires Confirmation:** No
**Parameters:**
* `owner` (TEXT, Required): Owner of the repository
* `repository` (TEXT, Required): Repository name
**Output:** Returns an array of issues with their details including number, title, body, state, and labels
***
### Create Issue
##### `github.createIssue`
Creates an issue for a specified repository
**Requires Confirmation:** Yes
**Parameters:**
* `owner` (TEXT, Required): The owner of the GitHub repository you want to create an issue in
* `repository` (TEXT, Required): The name of the GitHub repository you want to create an issue in
* `title` (TEXT, Required): The title of the issue
* `body` (MULTI\_LINE\_TEXT, Required): The body / description of the issue
* `assignees` (TEXT, Optional): GitHub usernames of people who should be assigned to this issue. You can provide multiple assignees as a comma-separated list (e.g., username1, username2) or a single username
* `labels` (TEXT, Optional): Labels to associate with this issue. You can provide multiple labels as a comma-separated list (e.g., bug, enhancement) or a single label name
**Output:** Returns the created issue with its number and details
***
### Update Issue
##### `github.updateIssue`
Updates a specified issue
**Requires Confirmation:** Yes
**Parameters:**
* `owner` (TEXT, Required): The owner of the GitHub repository containing the issue
* `repository` (TEXT, Required): The name of the GitHub repository containing the issue
* `title` (TEXT, Optional): The new title of the issue
* `body` (MULTI\_LINE\_TEXT, Optional): The new body / description of the issue
* `assignees` (TEXT, Optional): GitHub usernames of people who should be assigned to this issue. You can provide multiple assignees as a comma-separated list (e.g., username1, username2) or a single username
* `labels` (TEXT, Optional): Labels to associate with this issue. You can provide multiple labels as a comma-separated list (e.g., bug, enhancement) or a single label name
* `issueNumber` (TEXT, Required): The number of the issue you want to edit
**Output:** Returns the updated issue with its details
***
### Create Issue Comment
##### `github.createIssueComment`
Creates a comment on a specified issue
**Requires Confirmation:** Yes
**Parameters:**
* `owner` (TEXT, Required): The owner of the GitHub repository containing the issue
* `repository` (TEXT, Required): The name of the GitHub repository containing the issue
* `issueNumber` (TEXT, Required): The number of the issue you want to create a comment for
* `comment` (TEXT, Required): The comment you want to create
**Output:** Returns the created comment with its ID and details
***
## Common Use Cases
* Manage and organize your GitHub data
* Automate workflows with GitHub
* Generate insights and reports
* Connect GitHub with other tools
## Best Practices
**Getting Started:**
1. Enable the GitHub integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the GitHub integration, contact [support@langdock.com](mailto:support@langdock.com)
# Gmail
Source: https://docs.langdock.com/administration/integrations/gmail
Google's email service for sending, receiving, and managing emails
## Overview
Google's email service for sending, receiving, and managing emails. Through Langdock's integration, you can access and manage Gmail directly from your conversations.
**Authentication:** OAuth\
**Category:** Google Workspace\
**Availability:** All workspace plans
## Available Actions
### Create Email Draft
##### `gmail.createEmailDraft`
Creates a draft email in your Gmail account without sending it
**Requires Confirmation:** Yes
**Parameters:**
* `mailRecipient` (TEXT, Required): Email address of the person you want to send the email to
* `mailSubject` (TEXT, Optional): Subject line for the email draft
* `mailBody` (MULTI\_LINE\_TEXT, Optional): Main content of the email draft. Provide HTML-formatted content with proper tags: use `<b>` for bold text, `<i>` for italic text, `<ul>` for bullet lists, `<ol>` for numbered lists, and `<br>` for line breaks. Example: `<b>Important:</b> This is a test email.<br><b>Features:</b><ul><li>First item</li><li>Second item</li></ul>`
* `cc` (TEXT, Optional): Email addresses to carbon copy on this draft. Multiple addresses should be separated by commas, for example: [mats@acme.com](mailto:mats@acme.com), [jonas@acme.com](mailto:jonas@acme.com)
* `bcc` (TEXT, Optional): Email addresses to blind carbon copy on this draft. Recipients won't see who was BCC'd. Multiple addresses should be separated by commas, for example: [mats@acme.com](mailto:mats@acme.com), [jonas@acme.com](mailto:jonas@acme.com)
* `attachments` (FILE, Optional): Files to attach to the draft
**Output:** Returns the created draft with its ID and details
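The HTML-formatted `mailBody` could be assembled programmatically; the helper below is a hypothetical sketch (its name and structure are illustrative, not part of the integration):

```python
# Hypothetical sketch: building an HTML mailBody string for
# gmail.createEmailDraft from an intro line and bullet items.
def html_body(intro, items):
    bullets = "".join(f"<li>{item}</li>" for item in items)
    return f"<b>Important:</b> {intro}<br><b>Features:</b><ul>{bullets}</ul>"

body = html_body("This is a test email.", ["First item", "Second item"])
```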
***
### Reply to Email
##### `gmail.replytoEmail`
Creates and immediately sends a reply to an existing email thread from your Gmail account
**Requires Confirmation:** Yes
**Parameters:**
* `mailSubject` (TEXT, Required): Subject line for the email
* `mailRecipient` (TEXT, Required): Email address of the person you want to send the email to
* `mailBody` (MULTI\_LINE\_TEXT, Required): Main content of the email. Provide HTML-formatted content with proper tags: use `<b>` for bold text, `<i>` for italic text, `<ul>` for bullet lists, `<ol>` for numbered lists, and `<br>` for line breaks. Example: `<b>Important:</b> This is a test email.<br><b>Features:</b><ul><li>First item</li><li>Second item</li></ul>`
* `threadId` (TEXT, Required): The unique identifier of the email thread you want to send a reply to
* `cc` (TEXT, Optional): Email addresses to carbon copy on this reply. Multiple addresses should be separated by commas, for example: [mats@acme.com](mailto:mats@acme.com), [jonas@acme.com](mailto:jonas@acme.com)
* `bcc` (TEXT, Optional): Email addresses to blind carbon copy on this reply. Recipients won't see who was BCC'd. Multiple addresses should be separated by commas, for example: [mats@acme.com](mailto:mats@acme.com), [jonas@acme.com](mailto:jonas@acme.com)
* `attachments` (FILE, Optional): Files to attach to the reply
**Output:** Returns the sent email with its ID and details
***
### Send Email
##### `gmail.sendEmail`
Creates and immediately sends an email from your Gmail account
**Requires Confirmation:** Yes
**Parameters:**
* `mailSubject` (TEXT, Required): Subject line for the email
* `mailRecipient` (TEXT, Required): Email address of the person you want to send the email to
* `mailBody` (MULTI\_LINE\_TEXT, Required): Main content of the email. Provide HTML-formatted content with proper tags: use `<b>` for bold text, `<i>` for italic text, `<ul>` for bullet lists, `<ol>` for numbered lists, and `<br>` for line breaks. Example: `<b>Important:</b> This is a test email.<br><b>Features:</b><ul><li>First item</li><li>Second item</li></ul>`
* `cc` (TEXT, Optional): Email addresses to carbon copy on this email. Multiple addresses should be separated by commas, for example: [mats@acme.com](mailto:mats@acme.com), [jonas@acme.com](mailto:jonas@acme.com)
* `bcc` (TEXT, Optional): Email addresses to blind carbon copy on this email. Recipients won't see who was BCC'd. Multiple addresses should be separated by commas, for example: [mats@acme.com](mailto:mats@acme.com), [jonas@acme.com](mailto:jonas@acme.com)
* `attachments` (FILE, Optional): Files to attach to the email
**Output:** Returns the sent email with its ID and details
***
### Search Emails
##### `gmail.searchEmails`
Searches through your Gmail inbox and returns emails matching your search criteria
**Requires Confirmation:** No
**Parameters:**
* `filter` (TEXT, Optional): Search term to find specific emails. You can search by sender, recipient, subject, or content within the email body. Only returns messages matching the specified query. Supports the same query format as the Gmail search box, for example: `from:someuser@example.com rfc822msgid:<somemsgid@example.com> is:unread`. Only set this field if you really need to filter the emails; if you just want the latest, leave it empty.
**Output:** Returns search results with the following structure:
* `message`: Summary message about the search results
* `emails`: Array of email objects containing:
* `messageId`: Email message ID
* `threadId`: Email thread ID
* `subject`: Email subject
* `date`: Email date
* `from`: Sender information
* `to`: Recipient information
* `snippet`: Email snippet
* `body`: Email body content
* `hasAttachments`: Whether the email has attachments
* `attachmentCount`: Number of attachments
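Search queries for the `filter` parameter can be composed from individual criteria; Gmail's search syntax joins terms with spaces (implicit AND). The helper below is a hypothetical sketch:

```python
# Hypothetical helper: compose a Gmail search query string from
# operator/value pairs, joined with spaces (implicit AND).
def gmail_query(**criteria):
    return " ".join(f"{key}:{value}" for key, value in criteria.items())

query = gmail_query(**{"from": "someuser@example.com", "subject": "invoice"})
# query == "from:someuser@example.com subject:invoice"
```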
***
### Get Email with Attachments
##### `gmail.getEmailwithAttachments`
Retrieves a single email thread or message with full content including attachments. Returns all messages in the thread and their attachment files.
**Requires Confirmation:** No
**Parameters:**
* `threadId` (TEXT, Optional): The unique identifier of the email thread to retrieve. Either threadId or messageId is required.
* `messageId` (TEXT, Optional): The unique identifier of a specific email message to retrieve. Either threadId or messageId is required.
**Output:** Returns the email thread or message with full content and attachments
***
### Create Draft Reply
##### `gmail.createDraftReply`
Creates a draft reply for an email.
**Requires Confirmation:** Yes
**Parameters:**
* `threadId` (TEXT, Required): The id of the thread containing the message that should be answered.
* `replyContent` (MULTI\_LINE\_TEXT, Required): The email content for the draft. Provide HTML-formatted content with proper tags: use `<b>` for bold text, `<i>` for italic text, `<ul>` for bullet lists, `<ol>` for numbered lists, and `<br>` for line breaks. Example: `<b>Important:</b> This is a test email.<br><b>Features:</b><ul><li>First item</li><li>Second item</li></ul>`
* `to` (TEXT, Optional): Email address of the recipient. If left empty, the reply will be sent to the sender of the last message in the thread. Multiple addresses should be separated by commas, for example: [mats@acme.com](mailto:mats@acme.com), [karl@acme.com](mailto:karl@acme.com)
* `cc` (TEXT, Optional): Email addresses to carbon copy on this reply. Multiple addresses should be separated by commas, for example: [mats@acme.com](mailto:mats@acme.com), [jonas@acme.com](mailto:jonas@acme.com)
* `bcc` (TEXT, Optional): Email addresses to blind carbon copy on this reply. Recipients won't see who was BCC'd. Multiple addresses should be separated by commas, for example: [mats@acme.com](mailto:mats@acme.com), [jonas@acme.com](mailto:jonas@acme.com)
* `attachments` (FILE, Optional): Files to attach to the draft reply
**Output:** Returns the created draft reply with its ID and details
#### Triggers
***
### New Email
##### `gmail.newEmail`
Triggers when a new email is received in your inbox (sent emails are excluded)
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns information about newly received emails
***
### New Email Matching Search
##### `gmail.newEmailMatchingSearch`
Triggers when new emails matching your search query are received (sent emails excluded unless you add 'in:sent')
**Requires Confirmation:** No
**Parameters:**
* `searchQuery` (TEXT, Required): Google search query to filter emails. Examples: 'from:[johndoe@example.com](mailto:johndoe@example.com)', 'subject:Important', 'has:attachment'. Sent emails are automatically excluded unless you include 'in:sent' in your query
**Output:** Returns information about emails matching the search criteria
***
## Common Use Cases
* Manage and organize your Gmail data
* Automate workflows with Gmail
* Generate insights and reports
* Connect Gmail with other tools
## Best Practices
**Getting Started:**
1. Enable the Gmail integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Gmail integration, contact [support@langdock.com](mailto:support@langdock.com)
# Google Analytics
Source: https://docs.langdock.com/administration/integrations/google-analytics
Access Google Analytics data and generate comprehensive reports for website performance analysis
## Overview
Access Google Analytics data and generate comprehensive reports for website performance analysis. Through Langdock's integration, you can access and manage Google Analytics directly from your conversations.
**Authentication:** OAuth\
**Category:** Google Workspace\
**Availability:** All workspace plans
## Available Actions
### Get Realtime Users by Device
##### `googleanalytics.getRealtimeUsersbyDevice`
Retrieves active users in the last 30 minutes broken down by device category (desktop, mobile, tablet)
**Requires Confirmation:** No
**Parameters:**
* `propertyId` (TEXT, Required): The Google Analytics 4 property ID. This is the numeric identifier for your GA4 property (e.g., 123456789)
**Output:** Returns realtime user data broken down by device category
***
### Run Report
##### `googleanalytics.runReport`
Generate custom analytics reports with flexible date ranges, metrics, and dimensions
**Requires Confirmation:** No
**Parameters:**
* `propertyId` (TEXT, Required): The Google Analytics 4 property ID. This is the numeric identifier for your GA4 property (e.g., 123456789)
* `startDate` (TEXT, Required): Start date for the report in YYYY-MM-DD format or relative date (e.g., '7daysAgo', 'yesterday')
* `endDate` (TEXT, Required): End date for the report in YYYY-MM-DD format or relative date (e.g., 'yesterday', 'today')
* `metrics` (TEXT, Required): The metric to include in the report (e.g., 'sessions', 'activeUsers', 'screenPageViews', 'bounceRate')
* `dimensions` (TEXT, Optional): Optional dimension to group the data by (e.g., 'country', 'deviceCategory', 'pagePath', 'sessionSource')
* `pageFilter` (TEXT, Optional): Optional filter to include only pages containing this text (e.g., '/de/pricing', '/features', '/blog')
**Output:** Returns analytics report data with the following structure:
* `dimensionHeaders`: Information about the dimensions in the report
* `metricHeaders`: Information about the metrics in the report
* `rows`: Array of data rows containing dimension and metric values
* `totals`: Summary totals for the report
* `rowCount`: Number of rows in the report
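These parameters map closely onto the public GA4 Data API `runReport` request schema. A minimal sketch of the request body such an action would produce (field names follow the `analyticsdata.googleapis.com` v1beta schema; the helper itself is illustrative):

```python
# Sketch: the GA4 Data API runReport request body implied by the
# parameters above. Helper is illustrative; field names are GA4's.
def build_run_report_body(start_date, end_date, metric, dimension=None):
    body = {
        "dateRanges": [{"startDate": start_date, "endDate": end_date}],
        "metrics": [{"name": metric}],
    }
    if dimension:
        body["dimensions"] = [{"name": dimension}]
    return body

body = build_run_report_body("7daysAgo", "yesterday", "sessions", "country")
```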
***
### Run Pivot Report
##### `googleanalytics.runPivotReport`
Generate advanced pivot table reports for data analysis, correlation discovery, and comparing high/low performers across multiple dimensions
**Requires Confirmation:** No
**Parameters:**
* `propertyId` (TEXT, Required): The Google Analytics 4 property ID. This is the numeric identifier for your GA4 property (e.g., 123456789)
* `startDate` (TEXT, Required): Start date for the report in YYYY-MM-DD format or relative date (e.g., '30daysAgo', '7daysAgo')
* `endDate` (TEXT, Required): End date for the report in YYYY-MM-DD format or relative date (e.g., 'yesterday', 'today')
* `metrics` (TEXT, Required): Comma-separated list of metrics to analyze (e.g., 'sessions,activeUsers,screenPageViews,bounceRate')
* `dimensions` (TEXT, Required): Comma-separated list of ALL dimensions you want to analyze. Note: In pivot reports, only dimensions that are also specified in 'Pivot dimensions' will be used to avoid API errors.
* `pivotDimensions` (TEXT, Required): Comma-separated list of specific dimensions to use as pivot columns/rows. These should be a subset of the dimensions above (e.g., 'deviceCategory' or 'deviceCategory,country')
**Output:** Returns pivot table report data with cross-tabulated results
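The subset constraint between `dimensions` and `pivotDimensions` can be checked before calling the action. A minimal sketch (the helper is illustrative):

```python
# Sketch: enforcing the constraint described above - every pivot
# dimension must also appear in the main dimensions list.
def validate_pivot_dimensions(dimensions, pivot_dimensions):
    dims = {d.strip() for d in dimensions.split(",")}
    pivots = [p.strip() for p in pivot_dimensions.split(",")]
    missing = [p for p in pivots if p not in dims]
    if missing:
        raise ValueError(f"Pivot dimensions not in dimensions list: {missing}")
    return pivots

validate_pivot_dimensions("deviceCategory,country", "deviceCategory")  # ok
```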
***
### Get Metadata
##### `googleanalytics.getMetadata`
Retrieve all available dimensions and metrics for the property, including custom dimensions and metrics
**Requires Confirmation:** No
**Parameters:**
* `propertyId` (TEXT, Required): The Google Analytics 4 property ID. This is the numeric identifier for your GA4 property (e.g., 123456789)
**Output:** Returns metadata including all available dimensions and metrics for the property
***
### Analyze Content Performance
##### `googleanalytics.analyzeContentPerformance`
Specialized report for analyzing page/content performance to identify high and low performers, detect anomalies, and understand topic effectiveness
**Requires Confirmation:** No
**Parameters:**
* `propertyId` (TEXT, Required): The Google Analytics 4 property ID. This is the numeric identifier for your GA4 property (e.g., 123456789)
* `startDate` (TEXT, Required): Start date for the analysis in YYYY-MM-DD format or relative date (e.g., '30daysAgo', '7daysAgo')
* `endDate` (TEXT, Required): End date for the analysis in YYYY-MM-DD format or relative date (e.g., 'yesterday', 'today')
* `sortBy` (TEXT, Optional): Metric to sort results by to identify top/bottom performers (e.g., 'sessions', 'activeUsers', 'engagementRate', 'bounceRate')
* `limit` (NUMBER, Optional): Maximum number of pages/content to return (default: 50, max: 100)
* `pageFilter` (TEXT, Optional): Filter to include only pages containing this text (e.g., 'pricing', 'blog', 'features'). Leave empty for all pages.
* `metrics` (TEXT, Optional): Optional comma-separated list of metrics to include (e.g., 'sessions,activeUsers,screenPageViews')
* `dimensions` (TEXT, Optional): Optional comma-separated list of dimensions to group by (e.g., 'pagePath,pageTitle')
**Output:** Returns content performance analysis with page rankings and performance metrics
***
### Analyze Traffic Sources
##### `googleanalytics.analyzeTrafficSources`
Analyze traffic sources, acquisition channels, and marketing performance to detect anomalies and identify effective channels
**Requires Confirmation:** No
**Parameters:**
* `propertyId` (TEXT, Required): The Google Analytics 4 property ID. This is the numeric identifier for your GA4 property (e.g., 123456789)
* `startDate` (TEXT, Required): Start date for the analysis in YYYY-MM-DD format or relative date (e.g., '30daysAgo', '7daysAgo')
* `endDate` (TEXT, Required): End date for the analysis in YYYY-MM-DD format or relative date (e.g., 'yesterday', 'today')
* `groupBy` (SELECT, Optional): Dimension to group traffic sources by (source, medium, campaign, channelGroup)
* `metrics` (TEXT, Optional): Optional comma-separated list of metrics to include (e.g., 'sessions,activeUsers,newUsers,engagementRate')
* `sortBy` (TEXT, Optional): Optional metric name to sort results by (e.g., 'sessions')
**Output:** Returns traffic source analysis with channel performance metrics
***
### Batch Run Reports
##### `googleanalytics.batchRunReports`
Generate multiple custom reports in a single API call for efficient data analysis and comparison
**Requires Confirmation:** No
**Parameters:**
* `propertyId` (TEXT, Required): The Google Analytics 4 property ID. This is the numeric identifier for your GA4 property (e.g., 123456789)
* `reportRequests` (MULTI\_LINE\_TEXT, Required): JSON array of report configurations. Each report should include startDate, endDate, metrics, and dimensions. Example: `[{"startDate": "7daysAgo", "endDate": "yesterday", "metrics": "sessions", "dimensions": "country"}, {"startDate": "30daysAgo", "endDate": "yesterday", "metrics": "activeUsers", "dimensions": "sessionSource"}]`
**Output:** Returns multiple reports in a single response for efficient data analysis
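Building the `reportRequests` value programmatically avoids JSON quoting mistakes. A minimal sketch (the report configurations shown are illustrative):

```python
import json

# Sketch: serializing the reportRequests parameter as a JSON array,
# per the parameter description above. Report values are illustrative.
requests = [
    {"startDate": "7daysAgo", "endDate": "yesterday",
     "metrics": "sessions", "dimensions": "country"},
    {"startDate": "30daysAgo", "endDate": "yesterday",
     "metrics": "activeUsers", "dimensions": "sessionSource"},
]
report_requests = json.dumps(requests)
```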
***
## Common Use Cases
Manage and organize your Google Analytics data
Automate workflows with Google Analytics
Generate insights and reports
Connect Google Analytics with other tools
## Best Practices
**Getting Started:**
1. Enable the Google Analytics integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Google Analytics integration, contact [support@langdock.com](mailto:support@langdock.com)
# Google Calendar
Source: https://docs.langdock.com/administration/integrations/google-calendar
Google Calendar lets you organize your schedule and share events with co-workers
## Overview
Google Calendar lets you organize your schedule and share events with co-workers. Through Langdock's integration, you can access and manage Google Calendar directly from your conversations.
**Authentication:** OAuth\
**Category:** Google Workspace\
**Availability:** All workspace plans
## Available Actions
### Add Event
##### `googlecalendar.addEvent`
Creates a new event in a specific calendar
**Requires Confirmation:** Yes
**Parameters:**
* `sendUpdates` (TEXT, Optional): Whether to send notifications ('all', 'externalOnly', 'none')
* `startTime` (TEXT, Required): Start time in RFC3339 format (e.g. '2025-03-15T09:00:00+01:00')
* `description` (TEXT, Optional): Description of the event (optional)
* `attendees` (TEXT, Optional): List of attendee email addresses (optional)
* `recurrence` (TEXT, Optional): List of RRULE, EXRULE, RDATE and EXDATE lines for a recurring event, as specified in RFC5545. Note that DTSTART and DTEND lines are not allowed. The separator between rules is an empty space.
* `endTime` (TEXT, Required): End time in RFC3339 format (e.g. '2025-03-15T10:00:00+01:00')
* `calendarId` (TEXT, Required): The id of the calendar (e.g. 'primary' for the main calendar)
* `location` (TEXT, Optional): Location of the event (optional)
* `timeZone` (TEXT, Required): IMPORTANT: If you don't know the user's time zone, ask them. DO NOT GUESS THE TIME ZONE (example format: 'America/New\_York')
* `title` (TEXT, Required): Title of the event
* `eventType` (TEXT, Optional): The type of event. Allowed values: default, focusTime, outOfOffice, workingLocation, etc. See Google Calendar API docs for full list. Provide one event type only
**Output:** Returns the created event with the following structure:
* `id`: Event ID
* `summary`: Event title
* `description`: Event description
* `start`: Start time information with dateTime and timeZone
* `end`: End time information with dateTime and timeZone
* `location`: Event location
* `attendees`: Array of attendee objects with email addresses
* `recurrence`: Recurrence rules if applicable
* `htmlLink`: Link to view the event in Google Calendar
* `created`: Creation timestamp
* `updated`: Last update timestamp
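Putting the time and recurrence parameters together: a sketch of a plausible Add Event parameter set with an RFC5545 recurrence rule (all values are illustrative; the RRULE syntax itself is standard RFC5545):

```python
# Sketch: illustrative Add Event parameters. Times use RFC3339 with an
# explicit offset; the recurrence line is RFC5545 and contains no
# DTSTART/DTEND lines, per the description above.
event_params = {
    "calendarId": "primary",
    "title": "Weekly sync",
    "startTime": "2025-03-15T09:00:00+01:00",
    "endTime": "2025-03-15T10:00:00+01:00",
    "timeZone": "Europe/Berlin",
    "recurrence": "RRULE:FREQ=WEEKLY;BYDAY=MO;COUNT=10",
}
```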
***
### Update Event
##### `googlecalendar.updateEvent`
Updates an event
**Requires Confirmation:** Yes
**Parameters:**
* `eventId` (TEXT, Required): ID of the event
* `endTime` (TEXT, Optional): End time in RFC3339 format (e.g. '2025-03-15T10:00:00+01:00')
* `startTime` (TEXT, Optional): New start time in RFC3339 format (e.g. '2025-03-15T09:00:00+01:00')
* `calendarId` (TEXT, Required): The id of the calendar (e.g. 'primary' for the main calendar)
* `timeZone` (TEXT, Optional): New time zone (e.g. 'America/New\_York')
* `eventTitle` (TEXT, Optional): New title of the event (optional)
* `attendees` (TEXT, Optional): New list of attendee email addresses (optional)
* `description` (TEXT, Optional): New description of the event (optional)
* `location` (TEXT, Optional): New location of the event (optional)
* `sendUpdates` (TEXT, Optional): New policy on whether to send notifications ('all', 'externalOnly', 'none') (optional)
* `recurrence` (TEXT, Optional): New list of RRULE, EXRULE, RDATE and EXDATE lines for a recurring event, as specified in RFC5545. Note that DTSTART and DTEND lines are not allowed. The separator between rules is an empty space.
* `eventType` (TEXT, Optional): The type of event to update. Allowed values: default, focusTime, outOfOffice, workingLocation, etc. See Google Calendar API docs for full list. Provide as a comma-separated list.
**Output:** Returns the updated event with its new details
***
### Get Event
##### `googlecalendar.getEvent`
Gets an event
**Requires Confirmation:** No
**Parameters:**
* `eventId` (TEXT, Required): The id of the specific event to retrieve
* `calendarId` (TEXT, Required): The id of the calendar (can use 'primary' for the user's primary calendar)
**Output:** Returns the event details including all properties and metadata
***
### Search for Events
##### `googlecalendar.searchforEvents`
Gets calendar events by search query
**Requires Confirmation:** No
**Parameters:**
* `maxResults` (TEXT, Optional): Maximum number of results to return (optional, default: 10)
* `searchQuery` (TEXT, Optional): When using the search field for calendar events, input specific and relevant keywords that are likely to appear in the following fields:
IMPORTANT, DO NOT IGNORE: if asked to search for things like an appointment, meeting, or call, do not include that term in the search query, as these are synonyms for event
* Summary or Title: Include keywords that describe the event, such as "Project Meeting," "Quarterly Review," or "Team Lunch."
* Description: Use terms related to the event's content or purpose, like "budget discussion" or "client presentation."
* Location: Specify the name of the location or building where the event is held, such as "Conference Room A" or "Main Office."
* Attendees: Include names or email addresses of specific attendees, e.g., "[john.doe@example.com](mailto:john.doe@example.com)" or "Jane Smith."
* Organizer: Use the organizer's name or email to find events they are hosting, such as "[organizer@example.com](mailto:organizer@example.com)" or "Michael Johnson."
* Working Location Properties: If applicable, use identifiers like office location labels or building IDs.
* `endDate` (TEXT, Optional): End date of the time period searched for the search query. Upper bound (exclusive) for an event's start time to filter by. Optional.
Must be an RFC3339 timestamp with mandatory time zone offset, for example, 2011-06-03T10:00:00-07:00, 2011-06-03T10:00:00Z. Milliseconds may be provided but are ignored. If Start date is set, End date must be greater than Start date.
* `calendarId` (TEXT, Required): The id of the calendar to search (use 'primary' for user's primary calendar)
* `startDate` (TEXT, Optional): Start date of the time period searched for the search query. Lower bound (exclusive) for an event's end time to filter by. Optional.
Must be an RFC3339 timestamp with mandatory time zone offset, for example, 2011-06-03T10:00:00-07:00, 2011-06-03T10:00:00Z. Milliseconds may be provided but are ignored. If End date is set, Start date must be smaller than End date.
* `desc` (BOOLEAN, Optional): Whether to return results in descending order of start time.
* `eventType` (TEXT, Optional): The type of event to filter for. Allowed values: default, focusTime, outOfOffice, workingLocation, etc. See Google Calendar API docs for full list. Provide only one single event type.
**Output:** Returns an array of events matching the search criteria
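The `startDate`/`endDate` bounds require RFC3339 timestamps with a mandatory offset. A minimal sketch of producing them (the helper and the example instant are illustrative):

```python
from datetime import datetime, timedelta, timezone

# Sketch: formatting RFC3339 timestamps with the mandatory time zone
# offset that startDate/endDate expect. Helper is illustrative.
def rfc3339(dt):
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

now = datetime(2025, 3, 15, 12, 0, tzinfo=timezone.utc)
start_date = rfc3339(now - timedelta(days=7))  # lower bound
end_date = rfc3339(now)                        # upper bound
# start_date -> "2025-03-08T12:00:00Z", end_date -> "2025-03-15T12:00:00Z"
```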
***
### Delete Event
##### `googlecalendar.deleteEvent`
Deletes an event
**Requires Confirmation:** Yes
**Parameters:**
* `eventId` (TEXT, Required): ID of the event to delete. For event series, note that providing an event instance ID deletes just that instance, while providing the recurring event's master ID deletes the entire series.
For an event instance ID of a recurring event such as 7hagg0gtspd2b03lm8i3g4irr0\_20250318T160000Z, the part before the first \_ is the event's master ID.
* `calendarId` (TEXT, Required): The id of the calendar (e.g. 'primary' for the main calendar)
**Output:** Returns confirmation of the deletion
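The instance-to-master ID rule is a simple string split. A minimal sketch (the helper name is illustrative):

```python
# Sketch: deriving a recurring event's master ID from an instance ID -
# everything before the first underscore, per the rule above.
def master_event_id(event_id):
    return event_id.split("_", 1)[0]

master_event_id("7hagg0gtspd2b03lm8i3g4irr0_20250318T160000Z")
# -> '7hagg0gtspd2b03lm8i3g4irr0'
```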
***
### List Calendars
##### `googlecalendar.listCalendars`
Lists all calendars accessible to the authenticated user using the Google Calendar API /users/me/calendarList endpoint.
**Requires Confirmation:** No
**Parameters:**
* `maxResults` (TEXT, Optional): Maximum number of entries to return in one result page (default: 100, max: 250).
* `minAccessRole` (SELECT, Optional): Restricts results to calendars where the user has at least this access role. Allowed values: freeBusyReader, reader, writer, owner.
* `pageToken` (TEXT, Optional): Token specifying which result page to return (for pagination).
* `showDeleted` (BOOLEAN, Optional): Whether to include deleted calendar list entries in the result (default: false).
* `showHidden` (BOOLEAN, Optional): Whether to show hidden entries (default: false).
* `syncToken` (TEXT, Optional): For incremental sync, only return entries changed since the previous request with this token. Cannot be used with minAccessRole.
**Output:** Returns an array of calendars with the following structure:
* `id`: Calendar ID
* `summary`: Calendar name
* `description`: Calendar description
* `timeZone`: Calendar time zone
* `accessRole`: User's access role for this calendar
* `backgroundColor`: Calendar color
* `foregroundColor`: Text color for this calendar
* `selected`: Whether this calendar is selected
* `primary`: Whether this is the user's primary calendar
***
### Get Free/Busy for Calendar
##### `googlecalendar.getFreeBusyforCalendar`
Retrieves free/busy information for one or more calendars over a specified time range using the Google Calendar API /freeBusy endpoint.
**Requires Confirmation:** No
**Parameters:**
* `timeMin` (TEXT, Required): RFC3339 timestamp for the start of the time range to check (inclusive). Example: 2025-05-15T08:00:00Z
* `timeMax` (TEXT, Required): RFC3339 timestamp for the end of the time range to check (exclusive). Example: 2025-05-15T18:00:00Z
* `timeZone` (TEXT, Optional): Time zone for the response (optional, defaults to UTC). Example: Europe/Berlin
* `items` (TEXT, Required): List of calendar IDs to check (e.g., emails, resource IDs, or 'primary'). Enter as a comma-separated list.
**Output:** Returns free/busy information with the following structure:
* `kind`: API resource type
* `timeMin`: Start of the queried range
* `timeMax`: End of the queried range
* `calendars`: Object containing free/busy information for each requested calendar
* `busy`: Array of time ranges when the calendar is busy
* `errors`: Any errors encountered for this calendar
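A common follow-up is inverting the returned busy ranges into free gaps within the queried window. A minimal sketch, assuming UTC "Z" timestamps as in the examples above (the helper and data shapes are illustrative):

```python
from datetime import datetime

# Sketch: computing free gaps from a calendar's busy ranges within the
# queried [timeMin, timeMax) window. Assumes UTC 'Z' timestamps.
def free_slots(time_min, time_max, busy):
    parse = lambda s: datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ")
    cursor, slots = parse(time_min), []
    for b in sorted(busy, key=lambda b: b["start"]):
        start, end = parse(b["start"]), parse(b["end"])
        if start > cursor:
            slots.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < parse(time_max):
        slots.append((cursor, parse(time_max)))
    return slots

busy = [{"start": "2025-05-15T09:00:00Z", "end": "2025-05-15T10:00:00Z"}]
slots = free_slots("2025-05-15T08:00:00Z", "2025-05-15T18:00:00Z", busy)
# two free gaps: 08:00-09:00 and 10:00-18:00
```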
#### Triggers
***
### New Event
##### `googlecalendar.newEvent`
Triggers when new calendar events are created in specified calendars
**Requires Confirmation:** No
**Parameters:**
* `calendarId` (TEXT, Optional): ID of the calendar to monitor for new events. Identifies which specific calendar to check for new events
* `daysToInclude` (TEXT, Optional): Number of days in the future to look for events. Default is 30 days
**Output:** Returns information about newly created events
***
### Event Start
##### `googlecalendar.eventStart`
Triggers when events are about to start within a specified time window
**Requires Confirmation:** No
**Parameters:**
* `calendarId` (TEXT, Optional): ID of the calendar to monitor. Defaults to your primary calendar if not specified
* `minuteBefore` (TEXT, Required): Number of minutes before an event starts to trigger the workflow. Default is 15 minutes
**Output:** Returns information about upcoming events
***
### New Event Matching Search
##### `googlecalendar.newEventMatchingSearch`
Triggers when new calendar events matching the specified search query are created
**Requires Confirmation:** No
**Parameters:**
* `calendarId` (TEXT, Optional): ID of the calendar to monitor for new events. Identifies which specific calendar to check for new events
* `daysToInclude` (NUMBER, Optional): Number of days in the future to look for events. Default is 30 days
* `searchQuery` (TEXT, Required): Text to search for in event subjects. Examples: 'Meeting', 'Review', 'Project kickoff'
**Output:** Returns information about events matching the search criteria
***
## Common Use Cases
Manage and organize your Google Calendar data
Automate workflows with Google Calendar
Generate insights and reports
Connect Google Calendar with other tools
## Best Practices
**Getting Started:**
1. Enable the Google Calendar integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Google Calendar integration, contact [support@langdock.com](mailto:support@langdock.com)
# Google Docs
Source: https://docs.langdock.com/administration/integrations/google-docs
Integration for Google Docs
## Overview
Integration for Google Docs. Through Langdock's integration, you can access and manage Google Docs directly from your conversations.
**Authentication:** OAuth\
**Category:** Google Workspace\
**Availability:** All workspace plans
## Available Actions
### Get Document
##### `googledocs.getDocument`
Retrieve Google Docs content with flexible options
**Requires Confirmation:** No
**Parameters:**
* `documentId` (TEXT, Required): The ID of the Google Doc to retrieve. This is the string of characters in the URL after 'document/d/' when viewing the document
* `extractPlainTextOnly` (BOOLEAN, Optional): When true, returns only the plain text content of the document without formatting
* `metadataOnly` (BOOLEAN, Optional): When true, returns only the document metadata without the full content
* `includeTabsContent` (BOOLEAN, Optional): When true, returns document with tabs structure populated. When false or unspecified, returns content from the first tab only
**Output:** Returns document content with the following structure:
* `documentId`: Document ID
* `title`: Document title
* `body`: Document body content (if not metadata only)
* `plainText`: Plain text content (if extractPlainTextOnly is true)
* `viewUrl`: URL to view the document in Google Docs
* `retrievedAt`: Timestamp when the document was retrieved
* `hasMultipleTabs`: Whether the document has multiple tabs (if includeTabsContent is true)
* `tabs`: Array of tab objects (if includeTabsContent is true)
* `metadata`: Document metadata including creation time, modification time, owners, etc.
***
### Search File
##### `googledocs.searchFile`
Search for Google Docs documents in your Google Drive using flexible filters such as document name, owner, modification date, folder, and sharing status
**Requires Confirmation:** No
**Parameters:**
* `nameContains` (TEXT, Optional): Only return documents whose name contains this text. Partial and case-insensitive matches are allowed
* `owner` (TEXT, Optional): Only return documents owned by this email address. Leave blank to include documents from any owner
* `modifiedAfter` (TEXT, Optional): Only return documents modified after this date (inclusive). Use ISO format (e.g., 2024-05-01)
* `modifiedBefore` (TEXT, Optional): Only return documents modified before this date (exclusive). Use ISO format (e.g., 2024-06-01)
* `folder` (TEXT, Optional): Only return documents located in this folder. Provide the folder ID. Leave blank for all folders
* `maximumResults` (TEXT, Optional): The maximum number of documents to return. Leave blank to use the default (5000). The maximum allowed is 10,000
**Output:** Returns an array of document search results with the following structure:
* `id`: Document ID
* `name`: Document name
* `mimeType`: Document MIME type
* `createdTime`: Creation timestamp
* `modifiedTime`: Last modification timestamp
* `owners`: Array of owner information
* `webViewLink`: Link to view the document
* `size`: File size in bytes
* `description`: Document description
* `properties`: Custom properties
* `appProperties`: Application-specific properties
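The optional filters above compose into a single parameter set; blank fields are simply omitted. A sketch of assembling and validating such a set in Python (the helper function is illustrative, not part of the integration):

```python
from datetime import date

def build_search_params(name_contains=None, owner=None,
                        modified_after=None, modified_before=None,
                        folder=None, maximum_results=None):
    """Assemble a googledocs.searchFile parameter dict, validating dates
    (ISO format, e.g. 2024-05-01) and the result cap (max 10000)."""
    for value in (modified_after, modified_before):
        if value is not None:
            date.fromisoformat(value)  # raises ValueError on bad input
    if maximum_results is not None and int(maximum_results) > 10000:
        raise ValueError("maximumResults may not exceed 10000")
    params = {
        "nameContains": name_contains,
        "owner": owner,
        "modifiedAfter": modified_after,
        "modifiedBefore": modified_before,
        "folder": folder,
        "maximumResults": maximum_results,
    }
    # Blank (None) fields are dropped, matching the "leave blank" semantics.
    return {k: v for k, v in params.items() if v is not None}

print(build_search_params(name_contains="Q3 report", modified_after="2024-05-01"))
```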
***
### Update Document
##### `googledocs.updateDocument`
Update a Google Docs document
**Requires Confirmation:** No
**Parameters:**
* `documentId` (TEXT, Required): The ID of the Google Doc to update. This is the string of characters in the URL after 'document/d/' when viewing the document. You can also retrieve it via the Search File action
* `markdownText` (MULTI\_LINE\_TEXT, Optional): Optional raw Markdown to append and convert to native Google Docs formatting. Supports headings (`#` through `######`), lists (`-`, `*`, `1.`), and bold (`**text**`)
* `tab_id` (TEXT, Optional): Optional: ID of the tab to target. If omitted, defaults to first tab.
* `tab_title` (TEXT, Optional): Optional: Title of the tab to target (used if Tab ID is not provided).
* `anchor_text` (TEXT, Optional): Optional: Insert content immediately after the paragraph that contains this text in the selected tab. If not provided, content is appended to the end of the tab.
* `anchor_match_case` (BOOLEAN, Optional): Optional: When true, the anchor text match is case-sensitive.
* `actions` (OBJECT, Optional): A list of operations to apply to the document. The following operation types are available:

**Text Operations**

**Inserting Text:** You can insert text into a document in two ways. The first approach uses a specific index position where you know content already exists: provide the exact character index and the text string you want to insert. For example, inserting at index 5 places your text after the fifth character in the document. The second and safer approach is to append text at the end of the current segment using the `endOfSegmentLocation` parameter along with your text string. This prevents index-related errors when you are not certain about the document's structure.

**Replacing Text:** To find and replace text throughout the document, specify what to find (the `containsText` parameter) and what to replace it with (the `replaceText` parameter). Within the `containsText` object, you must provide the text string to search for, and you can optionally set `matchCase` to true for case-sensitive replacement. This operation affects all matching occurrences throughout the document.

**Deleting Content:** To remove content from a document, define a range with both `startIndex` and `endIndex` parameters. The operation deletes all content starting from the character at `startIndex` up to (but not including) the character at `endIndex`. Both indices must refer to existing content in the document.

**Formatting Operations**

**Styling Text:** Text styling requires three key components: a range (with `startIndex` and `endIndex` identifying the text to format), a `textStyle` object containing the style properties you want to change, and a `fields` parameter listing which specific style properties to update (for example `bold`, `fontSize`, or `foregroundColor`). Only the properties listed in the `fields` parameter are changed, allowing for targeted formatting.

**Styling Paragraphs:** Similar to text styling, paragraph formatting requires a range that encompasses the paragraphs to format, a `paragraphStyle` object with properties like alignment or indentation, and a `fields` parameter listing which paragraph style properties to update. The range must refer to entire paragraphs, not partial text within paragraphs.

**Document-Wide Styling:** For document-level formatting, provide a `documentStyle` object containing the properties you want to change, such as background color or page size, and a `fields` parameter listing which document properties to update. For background color specifically, you must use a nested structure defining the RGB color values. This operation affects the entire document's appearance.

**List Operations**

**Creating Bullet or Numbered Lists:** To convert existing paragraphs into a list, provide a range (with `startIndex` and `endIndex`) that covers all paragraphs you want to include, and a `bulletPreset` parameter defining the list style. Available presets include disc/circle/square bullets, checkbox bullets, decimal/alphabetical/Roman numeral numbering, and various combinations with different nesting styles. The range must include only complete paragraphs, and text should be inserted before applying list formatting.

**Removing List Formatting:** To convert bulleted or numbered paragraphs back to regular paragraphs, specify a range (with `startIndex` and `endIndex`) covering the list items you want to modify. This operation removes all bullet or numbering formatting while preserving the paragraph text and other formatting.

**Structural Elements**

**Creating Tables:** To insert a table, specify the number of rows and columns (both must be at least 1) and the `endOfSegmentLocation` where the table should be placed. After creating a table, retrieve the document structure to get the indices for individual cells before attempting to add content or styling to those cells.

**Inserting Table Rows:** To insert a new row in a table, specify a `tableCellLocation` with the table's start index and the row/column indices of a reference cell. Use `insertBelow` to control whether the row is inserted above or below the reference cell.

**Inserting Table Columns:** To insert a new column in a table, specify a `tableCellLocation` with the table's start index and the row/column indices of a reference cell. Use `insertRight` to control whether the column is inserted to the left or right of the reference cell.

**Deleting Table Rows:** To delete rows from a table, provide the `tableRowIndex` (0-based index of the row to delete) and the `tableStartLocation` with the index of where the table begins in the document.

**Deleting Table Columns:** To delete columns from a table, provide the `tableColumnIndex` (0-based index of the column to delete) and the `tableStartLocation` with the index of where the table begins in the document.

**Inserting Images:** Image insertion requires a location index where the image should appear, a URI pointing to the image source, and an `objectSize` parameter with width and height dimensions. Each dimension should include both a magnitude (numerical value) and a unit (such as "pt" for points). Images can be inserted at any valid index within the document.

**Adding Page Breaks:** To insert a page break, specify a location index where the break should occur. The index must refer to a valid position within existing document content, typically at the end of a paragraph.

**Creating Headers:** To add a header to your document, specify the type as either `DEFAULT` (appearing on most pages), `FIRST_PAGE` (only on the first page), or `EVEN_PAGE` (only on even-numbered pages). After creating a header, retrieve the document to get the header's ID before you can add content to it.

**Creating Footers:** Footer creation works identically to headers. Specify the type as `DEFAULT`, `FIRST_PAGE`, or `EVEN_PAGE` to determine where the footer appears. As with headers, retrieve the document after creation to get the footer's ID for adding content.

**Adding Footnotes:** To insert a footnote, provide a location index where the footnote reference should appear in the main text. The footnote is created with empty content, which you can populate in a subsequent operation after retrieving the document structure.

**Reference Operations**

**Creating Named Ranges:** A named range lets you mark a section of text for easy reference later. Provide a descriptive name for the range and the `startIndex` and `endIndex` parameters that define the range boundaries. After creating a named range, retrieve the document to get the generated range ID for future operations targeting that range.

These operations can be combined in a batch update request to make multiple changes to a document in a single API call. Some operations have dependencies: insert text before formatting it, create tables before adding content to cells, and create headers or footers before populating them with text.
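As a sketch of how these operations compose, the request shapes below follow the Google Docs API `batchUpdate` format (insert text, then bold the inserted span). Treat the exact indices as illustrative rather than a guaranteed contract of the `actions` parameter:

```python
# Two dependent operations: the text must exist before it can be styled.
actions = [
    {
        "insertText": {
            "endOfSegmentLocation": {},   # append at the end of the body
            "text": "Status report\n",
        }
    },
    {
        "updateTextStyle": {
            "range": {"startIndex": 1, "endIndex": 14},  # the inserted text
            "textStyle": {"bold": True},
            "fields": "bold",             # only the 'bold' property is touched
        }
    },
]

print(actions[1]["updateTextStyle"]["fields"])  # → bold
```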
**Output:** Returns confirmation of the document update
***
## Common Use Cases
Manage and organize your Google Docs data
Automate workflows with Google Docs
Generate insights and reports
Connect Google Docs with other tools
## Best Practices
**Getting Started:**
1. Enable the Google Docs integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Google Docs integration, contact [support@langdock.com](mailto:support@langdock.com)
# Google Drive
Source: https://docs.langdock.com/administration/integrations/google-drive
Cloud storage service for file backup, sharing, and collaboration
## Overview
Cloud storage service for file backup, sharing, and collaboration. Through Langdock's integration, you can access and manage Google Drive directly from your conversations.
**Authentication:** OAuth\
**Category:** Google Workspace\
**Availability:** All workspace plans
## Available Actions
### Download File
##### `googledrive.downloadFile`
Downloads the contents of a file from Google Drive based on its file ID
**Requires Confirmation:** No
**Parameters:**
* `itemId` (TEXT, Required): The unique identifier of the file you want to download from Google Drive
**Output:** Returns the file content as a downloadable file
***
### Get Current User
##### `googledrive.getCurrentUser`
Retrieves information about the currently logged-in Google Drive user
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns user information including:
* `id`: User ID
* `email`: User email address
* `name`: User display name
* `picture`: User profile picture URL
* `verified_email`: Whether the email is verified
***
### Search Files
##### `googledrive.searchFiles`
Searches through files in your Google Drive using simple text queries
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Optional): A query string for filtering the file results. If no query string is passed, it returns the most recent files. This searches through the full text and the titles of the files
**Output:** Returns an array of files matching the search criteria
***
### Get Folder
##### `googledrive.getFolder`
Searches for folders in Google Drive by name
**Requires Confirmation:** No
**Parameters:**
* `folderName` (TEXT, Required): Search term used to find folders that contain this text in their names. For example, you can search for 'Projects', 'Marketing', or 'Documents' to find folders with those terms in their names
**Output:** Returns an array of folders with the following structure:
* `id`: Folder ID
* `name`: Folder name
* `mimeType`: Folder MIME type (application/vnd.google-apps.folder)
* `parents`: Array of parent folder IDs
* `createdTime`: Creation timestamp
* `modifiedTime`: Last modification timestamp
* `webViewLink`: Link to view the folder in Google Drive
***
### Search Files (Advanced)
##### `googledrive.searchFilesAdvanced`
Searches through the available files in your Google Drive with advanced filtering options
**Requires Confirmation:** No
**Parameters:**
* `pageToken` (TEXT, Optional): The token for continuing a previous list request on the next page. This should be set to the value of 'nextPageToken' from the previous response
* `query` (TEXT, Optional): A query string for filtering the file results. If the user asks for recent files without a specific search query, leave this field empty
* `orderBy` (TEXT, Optional): A comma-separated list of sort keys. Valid keys are:
  * `createdTime`: When the file was created
  * `folder`: The folder ID, sorted alphabetically
  * `modifiedByMeTime`: The last time the file was modified by the user
  * `modifiedTime`: The last time the file was modified by anyone
  * `name`: The name of the file, sorted alphabetically (so 1, 12, 2, 22)
  * `name_natural`: The name of the file, sorted in natural order (so 1, 2, 12, 22)
  * `quotaBytesUsed`: The number of storage quota bytes used by the file
  * `recency`: The most recent timestamp from the file's date-time fields
  * `sharedWithMeTime`: When the file was shared with the user, if applicable
  * `starred`: Whether the user has starred the file
  * `viewedByMeTime`: The last time the file was viewed by the user

  Each key sorts ascending by default; reverse with the `desc` modifier. Example usage: `folder,modifiedTime desc,name`
* `folderId` (TEXT, Optional): Unique identifier of the folder in which you want to search.
**Output:** Returns search results with the following structure:
* `files`: Array of file objects containing:
* `id`: File ID
* `name`: File name
* `mimeType`: File MIME type
* `createdTime`: Creation timestamp
* `modifiedTime`: Last modification timestamp
* `size`: File size in bytes
* `webViewLink`: Link to view the file
* `owners`: Array of owner information
* `parents`: Array of parent folder IDs
* `nextPageToken`: Token for pagination (if more results available)
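When results span multiple pages, feed each response's `nextPageToken` back in as `pageToken` until it is absent. A minimal pagination sketch in Python (the fetch function here is a local stand-in for the action, fabricated for illustration):

```python
def collect_all_files(fetch_page):
    """Drain a paginated search: call fetch_page(page_token) until the
    response carries no nextPageToken, accumulating the 'files' arrays."""
    files, token = [], None
    while True:
        response = fetch_page(token)
        files.extend(response.get("files", []))
        token = response.get("nextPageToken")
        if not token:
            return files

# Stand-in for googledrive.searchFilesAdvanced, returning two fake pages.
def fake_fetch(page_token):
    if page_token is None:
        return {"files": [{"name": "a.txt"}], "nextPageToken": "p2"}
    return {"files": [{"name": "b.txt"}]}

print([f["name"] for f in collect_all_files(fake_fetch)])  # → ['a.txt', 'b.txt']
```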
***
### Download Google Drive File
##### `googledrive.downloadGoogleDriveFile`
Downloads the contents of a file from Google Drive based on its file ID
**Requires Confirmation:** No
**Parameters:**
* `itemId` (TEXT, Required): The unique identifier of the file you want to download from Google Drive.
**Output:** Returns the file content as a downloadable file
***
### List Files in Folder
##### `googledrive.listFilesinFolder`
Lists all files in a Google Drive folder including subfolders, limited to the first 200 files
**Requires Confirmation:** No
**Parameters:**
* `folderId` (TEXT, Required): The unique identifier of the Google Drive folder to list files from. This will include files from the folder and all its subfolders
**Output:** Returns an array of files in the folder with their details
***
### Upload File
##### `googledrive.uploadFile`
Upload a file to Google Drive with optional folder destination
**Requires Confirmation:** No
**Parameters:**
* `file` (FILE, Required): The file to upload to Google Drive
* `folderId` (TEXT, Optional): The ID of the folder where you want to upload the file. If not provided, the file will be uploaded to the root of your Google Drive
* `fileName` (TEXT, Optional): Optional custom name for the file. If not provided, the original filename will be used
**Output:** Returns the uploaded file information including:
* `id`: File ID
* `name`: File name
* `mimeType`: File MIME type
* `size`: File size in bytes
* `webViewLink`: Link to view the file
* `createdTime`: Upload timestamp
#### Triggers
***
### New File
##### `googledrive.newFile`
Triggers when new files are added to Google Drive
**Requires Confirmation:** No
**Parameters:**
* `folderIds` (TEXT, Optional): Comma-separated list of folder IDs to monitor for new files
**Output:** Returns information about newly added files
***
### Updated File
##### `googledrive.updatedFile`
Triggers when files are updated in Google Drive
**Requires Confirmation:** No
**Parameters:**
* `fileIds` (TEXT, Optional): Comma-separated list of file IDs to monitor for updates
* `folderIds` (TEXT, Optional): Comma-separated list of folder IDs to monitor for updates
**Output:** Returns information about updated files
***
### New Folder
##### `googledrive.newFolder`
Triggers when new folders are added to Google Drive
**Requires Confirmation:** No
**Parameters:**
* `parentFolderId` (TEXT, Optional): Comma-separated list of parent folder IDs to monitor for new folders
**Output:** Returns information about newly created folders
***
## Common Use Cases
Manage and organize your Google Drive data
Automate workflows with Google Drive
Generate insights and reports
Connect Google Drive with other tools
## Best Practices
**Getting Started:**
1. Enable the Google Drive integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Google Drive integration, contact [support@langdock.com](mailto:support@langdock.com)
# Google Meet
Source: https://docs.langdock.com/administration/integrations/google-meet
Real-time meetings by Google
## Overview
Real-time meetings by Google. Through Langdock's integration, you can access and manage Google Meet directly from your conversations.
**Authentication:** OAuth\
**Category:** Google Workspace\
**Availability:** All workspace plans
## Available Actions
### Get Meeting Details
##### `googlemeet.getMeetingDetails`
Gets the details of a meeting from the event ID
**Requires Confirmation:** No
**Parameters:**
* `eventId` (TEXT, Required): The event id of the meeting
**Output:** Returns meeting details with the following structure:
* `conferenceRecords`: Array of conference record objects containing:
* `name`: Conference record name
* `startTime`: Meeting start time
* `endTime`: Meeting end time
* `space`: Space information including meeting code
* `activeParticipantCount`: Number of active participants
* `maxParticipantCount`: Maximum number of participants
* `recordedDuration`: Duration of recording if available
* `state`: Conference state (active, ended, etc.)
***
### Get Meeting Transcription
##### `googlemeet.getMeetingTranscription`
Gets the available transcripts of a meeting
**Requires Confirmation:** No
**Parameters:**
* `conferenceRecordId` (TEXT, Required): The conference record id of the meeting
**Output:** Returns meeting transcription information including:
* `transcriptDocuments`: Array of transcript documents containing:
* `name`: Transcript document name
* `driveFile`: Drive file information for the transcript
* `exportUri`: Export URI for the transcript
* `state`: Transcript state (active, completed, etc.)
* `createTime`: Creation timestamp
* `endTime`: End timestamp
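The two actions chain naturally: `getMeetingDetails` yields conference record names, which `getMeetingTranscription` consumes as `conferenceRecordId`. A sketch of extracting those IDs from a details-style response (the payload below is fabricated for illustration):

```python
def conference_record_ids(details):
    """Pull the record names out of a getMeetingDetails-style response."""
    return [record["name"] for record in details.get("conferenceRecords", [])]

details = {
    "conferenceRecords": [
        {"name": "conferenceRecords/abc-123", "state": "ended"},
    ]
}
for record_id in conference_record_ids(details):
    print(record_id)  # each ID can feed getMeetingTranscription
```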
***
## Common Use Cases
Manage and organize your Google Meet data
Automate workflows with Google Meet
Generate insights and reports
Connect Google Meet with other tools
## Best Practices
**Getting Started:**
1. Enable the Google Meet integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Google Meet integration, contact [support@langdock.com](mailto:support@langdock.com)
# Google Sheets
Source: https://docs.langdock.com/administration/integrations/google-sheets
Manage and analyze data in Google's spreadsheets solution
## Overview
Manage and analyze data in Google's spreadsheets solution. Through Langdock's integration, you can access and manage Google Sheets directly from your conversations.
**Authentication:** OAuth\
**Category:** Google Workspace\
**Availability:** All workspace plans
## Available Actions
### Create Rows
##### `googlesheets.createRows`
Creates new rows in a specific spreadsheet
**Requires Confirmation:** Yes
**Parameters:**
* `amount` (NUMBER, Required): Number of rows to insert, starting at `startIndex`
* `startIndex` (TEXT, Optional): Row index at which to start inserting. Default: 0
* `sheetId` (NUMBER, Required): ID of the sheet (numeric). Default: 0 (first sheet)
* `spreadsheetId` (TEXT, Required): ID of the spreadsheet
**Output:** Returns batch update response with details about the inserted rows
***
### Update Spreadsheet Rows
##### `googlesheets.updateSpreadsheetRows`
Updates rows in a specific spreadsheet
**Requires Confirmation:** Yes
**Parameters:**
* `spreadsheetId` (TEXT, Required): ID of the spreadsheet
* `range` (TEXT, Optional): A1 notation defining the sheet and range into which values are inserted. Default: entire Sheet1
* `valueInput` (MULTI\_LINE\_TEXT, Required): Values to insert into the specified rows, in CSV notation
**Output:** Returns update response with details about the modified rows
***
### Append Rows to Spreadsheet
##### `googlesheets.appendRowstoSpreadsheet`
Appends new rows to the end of a spreadsheet, automatically finding the last row with data
**Requires Confirmation:** Yes
**Parameters:**
* `spreadsheetId` (TEXT, Required): ID of the spreadsheet
* `range` (TEXT, Optional): A1 Notation to define the range to search for a table of data. Default: Sheet1 (entire sheet)
* `valueInput` (MULTI\_LINE\_TEXT, Required): Values to append to the spreadsheet, in CSV notation
**Output:** Returns append response with details about the added rows
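Since `valueInput` expects CSV notation, building it with a proper CSV serializer avoids quoting bugs when cell values contain commas. A sketch in Python (the helper name is illustrative):

```python
import csv
import io

def to_csv_notation(rows):
    """Serialize rows (lists of cell values) into the CSV string expected
    by the valueInput parameter, quoting cells that contain commas."""
    buffer = io.StringIO()
    csv.writer(buffer, lineterminator="\n").writerows(rows)
    return buffer.getvalue()

rows = [
    ["2024-06-01", "Acme, Inc.", 1200],
    ["2024-06-02", "Globex", 800],
]
print(to_csv_notation(rows))
# 2024-06-01,"Acme, Inc.",1200
# 2024-06-02,Globex,800
```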
***
### List Spreadsheet Row
##### `googlesheets.listSpreadsheetRow`
Lists a specific spreadsheet row based on the row number
**Requires Confirmation:** No
**Parameters:**
* `rowNumber` (NUMBER, Required): The row that should be returned. Indexing starts at 1
* `sheetName` (TEXT, Optional): Name of the sheet (tab/page) in the spreadsheet. Default: Sheet1
* `spreadsheetId` (TEXT, Required): ID of the spreadsheet
**Output:** Returns the specified row data with all cell values
***
### List Spreadsheet Row Range
##### `googlesheets.listSpreadsheetRowRange`
Lists multiple spreadsheet rows based on a range
**Requires Confirmation:** No
**Parameters:**
* `sheetAndRange` (TEXT, Optional): A1 / R1C1 notation of the referenced sheet and range. Default: Sheet1, entire sheet
* `spreadsheetId` (TEXT, Required): The ID of the spreadsheet that should be edited
**Output:** Returns array of rows with cell values in the specified range
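Building the `sheetAndRange` value programmatically keeps A1 references consistent. A minimal sketch in Python (the helper is illustrative; the quoting rule for sheet names with spaces follows common A1-notation conventions):

```python
def a1_range(sheet_name, start_col, start_row, end_col, end_row):
    """Build an A1-notation reference like Sheet1!A2:D10 for sheetAndRange."""
    ref = f"{start_col}{start_row}:{end_col}{end_row}"
    # Sheet names containing spaces are wrapped in single quotes.
    if " " in sheet_name:
        return f"'{sheet_name}'!{ref}"
    return f"{sheet_name}!{ref}"

print(a1_range("Sheet1", "A", 2, "D", 10))   # → Sheet1!A2:D10
print(a1_range("Q3 Data", "B", 1, "C", 50))  # → 'Q3 Data'!B1:C50
```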
***
### Clear Spreadsheet Rows
##### `googlesheets.clearSpreadsheetRows`
Clears the content of the selected rows while keeping the rows themselves intact in the spreadsheet
**Requires Confirmation:** Yes
**Parameters:**
* `spreadsheetId` (TEXT, Required): ID of the spreadsheet
* `sheetAndRange` (TEXT, Optional): A1 / R1C1 notation of the referenced sheet and range. Default: Sheet1, entire sheet
**Output:** Returns clear response with details about the cleared range
***
### Get Spreadsheet Metadata
##### `googlesheets.getSpreadsheetMetadata`
Retrieves essential spreadsheet metadata including title, locale, timezone, and sheet properties
**Requires Confirmation:** No
**Parameters:**
* `spreadsheetId` (TEXT, Required): ID of the spreadsheet to get metadata for
**Output:** Returns spreadsheet metadata including title, locale, timezone, and sheet properties (IDs, names, dimensions)
***
### Delete Spreadsheet Rows
##### `googlesheets.deleteSpreadsheetRows`
Deletes a range of rows in a specific spreadsheet
**Requires Confirmation:** Yes
**Parameters:**
* `rowRangeEndIndex` (NUMBER, Required): End index of row range that should be deleted. Row indexing starts at 0
* `spreadsheetId` (TEXT, Required): ID of the spreadsheet
* `sheetId` (NUMBER, Required): ID of the sheet where the rows should be deleted
* `rowRangeStartIndex` (NUMBER, Required): Start index of row range that should be deleted. Row indexing starts at 0
**Output:** Returns delete response with details about the removed rows
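Because `rowRangeStartIndex`/`rowRangeEndIndex` are 0-based while spreadsheet rows display 1-based, off-by-one mistakes are easy. Assuming the range is half-open (end index exclusive, as in the Sheets API's `DeleteDimensionRange`), a conversion helper might look like:

```python
def delete_row_indices(first_row, last_row):
    """Convert a 1-based inclusive row span (as shown in the Sheets UI)
    to the 0-based indices used by deleteSpreadsheetRows, assuming a
    half-open range (end index exclusive)."""
    if not 1 <= first_row <= last_row:
        raise ValueError("need 1 <= first_row <= last_row")
    return {"rowRangeStartIndex": first_row - 1, "rowRangeEndIndex": last_row}

# Deleting visible rows 2 through 4:
print(delete_row_indices(2, 4))  # → {'rowRangeStartIndex': 1, 'rowRangeEndIndex': 4}
```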
***
## Common Use Cases
Manage and organize your Google Sheets data
Automate workflows with Google Sheets
Generate insights and reports
Connect Google Sheets with other tools
## Best Practices
**Getting Started:**
1. Enable the Google Sheets integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Google Sheets integration, contact [support@langdock.com](mailto:support@langdock.com)
# Google Tasks
Source: https://docs.langdock.com/administration/integrations/google-tasks
Google Tasks lets you manage your to-do lists and tasks across all your devices
## Overview
Google Tasks lets you manage your to-do lists and tasks across all your devices. Through Langdock's integration, you can access and manage Google Tasks directly from your conversations.
**Authentication:** OAuth\
**Category:** Google Workspace\
**Availability:** All workspace plans
## Available Actions
### List Task Lists
##### `googletasks.listTaskLists`
Get all task lists for the authenticated user
**Requires Confirmation:** No
**Parameters:**
* `maxResults` (NUMBER, Optional): Maximum number of task lists to return (default 100)
**Output:** Returns array of task lists with their IDs, titles, and metadata
***
### Create Task List
##### `googletasks.createTaskList`
Create a new task list
**Requires Confirmation:** No
**Parameters:**
* `title` (TEXT, Required): The title of the task list
**Output:** Returns the created task list with ID and title
***
### Get Task List
##### `googletasks.getTaskList`
Get details of a specific task list
**Requires Confirmation:** No
**Parameters:**
* `taskListId` (TEXT, Required): The unique identifier of the task list
**Output:** Returns task list details including ID, title, and metadata
***
### Update Task List
##### `googletasks.updateTaskList`
Update the title of a task list
**Requires Confirmation:** No
**Parameters:**
* `taskListId` (TEXT, Required): The unique identifier of the task list to update
* `title` (TEXT, Required): The new title for the task list
**Output:** Returns the updated task list with new title
***
### Delete Task List
##### `googletasks.deleteTaskList`
Delete a task list and all its tasks
**Requires Confirmation:** No
**Parameters:**
* `taskListId` (TEXT, Required): The unique identifier of the task list to delete
**Output:** Returns confirmation of deletion
***
### List Tasks
##### `googletasks.listTasks`
Get all tasks from a specific task list
**Requires Confirmation:** No
**Parameters:**
* `taskListId` (TEXT, Required): The unique identifier of the task list
* `showCompleted` (SELECT, Optional): Whether to include completed tasks (No/Yes)
* `showHidden` (SELECT, Optional): Whether to include hidden tasks (No/Yes)
* `maxResults` (NUMBER, Optional): Maximum number of tasks to return (default 100)
**Output:** Returns array of tasks with their details including ID, title, status, due date, and notes
***
### Create Task
##### `googletasks.createTask`
Create a new task in a specific task list
**Requires Confirmation:** No
**Parameters:**
* `taskListId` (TEXT, Required): The unique identifier of the task list
* `title` (TEXT, Required): The title of the task
* `notes` (TEXT, Optional): Additional details about the task
* `due` (TEXT, Optional): Due date in RFC3339 format (e.g. '2025-12-31T23:59:59Z')
* `parent` (TEXT, Optional): ID of the parent task to create a subtask
**Output:** Returns the created task with ID, title, and all specified details
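The `due` parameter expects an RFC3339 timestamp. A sketch of building the payload, with a placeholder `taskListId` (real IDs come from `googletasks.listTaskLists`); note that the Google Tasks API records only the date portion of `due`, so midnight UTC is a safe choice:

```python
from datetime import datetime, timezone

# Hypothetical createTask parameter payload.
due = datetime(2025, 12, 31, tzinfo=timezone.utc)

params = {
    "taskListId": "EXAMPLE_LIST_ID",  # placeholder ID
    "title": "Ship Q4 report",
    "notes": "Include the revenue breakdown",
    "due": due.strftime("%Y-%m-%dT%H:%M:%SZ"),  # RFC3339
}
print(params["due"])  # 2025-12-31T00:00:00Z
```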
***
### Get Task
##### `googletasks.getTask`
Get details of a specific task
**Requires Confirmation:** No
**Parameters:**
* `taskListId` (TEXT, Required): The unique identifier of the task list
* `taskId` (TEXT, Required): The unique identifier of the task
**Output:** Returns task details including ID, title, status, due date, notes, and parent information
***
### Update Task
##### `googletasks.updateTask`
Update an existing task
**Requires Confirmation:** No
**Parameters:**
* `taskListId` (TEXT, Required): The unique identifier of the task list
* `taskId` (TEXT, Required): The unique identifier of the task to update
* `title` (TEXT, Optional): The new title of the task
* `notes` (TEXT, Optional): Updated notes for the task
* `status` (TEXT, Optional): Task status: 'needsAction' or 'completed'
* `due` (TEXT, Optional): Due date in RFC3339 format
**Output:** Returns the updated task with new values
***
### Delete Task
##### `googletasks.deleteTask`
Delete a specific task
**Requires Confirmation:** No
**Parameters:**
* `taskListId` (TEXT, Required): The unique identifier of the task list
* `taskId` (TEXT, Required): The unique identifier of the task to delete
**Output:** Returns confirmation of deletion
***
### Move Task
##### `googletasks.moveTask`
Move a task to a different position or create subtasks by setting a parent task
**Requires Confirmation:** No
**Parameters:**
* `taskListId` (TEXT, Required): The unique identifier of the task list
* `taskId` (TEXT, Required): The unique identifier of the task to move
* `parent` (TEXT, Optional): ID of the parent task to make this a subtask
* `previous` (TEXT, Optional): ID of the task that should come before this task
**Output:** Returns the moved task with updated position information
***
### Clear Completed Tasks
##### `googletasks.clearCompletedTasks`
Clear all completed tasks from a task list
**Requires Confirmation:** No
**Parameters:**
* `taskListId` (TEXT, Required): The unique identifier of the task list
**Output:** Returns confirmation of cleared tasks
***
### Delete Completed Tasks
##### `googletasks.deleteCompletedTasks`
Permanently delete all completed tasks from a task list
**Requires Confirmation:** No
**Parameters:**
* `taskListId` (TEXT, Required): The unique identifier of the task list
**Output:** Returns confirmation of deleted tasks
***
## Common Use Cases
* Manage and organize your Google Tasks data
* Automate workflows with Google Tasks
* Generate insights and reports
* Connect Google Tasks with other tools
## Best Practices
**Getting Started:**
1. Enable the Google Tasks integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Google Tasks integration, contact [support@langdock.com](mailto:support@langdock.com)
# HubSpot
Source: https://docs.langdock.com/administration/integrations/hubspot
All-in-one platform that integrates marketing, sales, and customer service software
## Overview
All-in-one platform that integrates marketing, sales, and customer service software. Through Langdock's integration, you can access and manage HubSpot directly from your conversations.
**Authentication:** OAuth\
**Category:** CRM & Customer Support\
**Availability:** All workspace plans
## Available Actions
### Create Contact
##### `hubspot.createContact`
Creates a new contact in HubSpot
**Requires Confirmation:** Yes
**Parameters:**
* `additionalProperties` (TEXT, Optional): Any custom properties specific to your HubSpot account
* `firstName` (TEXT, Optional): The contact's first name
* `company` (TEXT, Optional): The name of the company the contact works for
* `zip` (TEXT, Optional): ZIP code of the contact
* `jobtitle` (TEXT, Optional): The contact's job title
* `address` (TEXT, Optional): Address of the contact
* `country` (TEXT, Optional): Country of the contact
* `website` (TEXT, Optional): Link to a personal profile of the contact or of the company
* `leadStatus` (TEXT, Optional): The current status of a lead in the sales pipeline
* `email` (TEXT, Required): The contact's email address
* `lastName` (TEXT, Optional): The contact's last name
* `owner` (TEXT, Optional): HubSpot owner ID of the responsible person for that contact
* `phone` (TEXT, Optional): The contact's phone number
* `lifecycleStage` (TEXT, Optional): The lifecycle stage of a contact or company
* `state` (TEXT, Optional): State of the contact (within their country)
* `leadSource` (TEXT, Optional): The source from which the lead originated in HubSpot
* `associations` (TEXT, Optional): Links between this contact and other HubSpot objects
**Output:** Returns the created contact with ID and all specified properties
***
### Create Company
##### `hubspot.createCompany`
Creates a new company in HubSpot
**Requires Confirmation:** Yes
**Parameters:**
* `associations` (TEXT, Optional): Associations to other objects (contacts, meeting, notes, deals etc.)
* `city` (TEXT, Optional): The city where the company's primary office or headquarters is located
* `industry` (TEXT, Optional): The sector or industry the company operates in
* `address` (TEXT, Optional): The street address of the company's primary location
* `name` (TEXT, Required): Company name
* `country` (TEXT, Optional): The country where the company is based
* `domain` (TEXT, Optional): The company's website domain without protocol or paths
* `additionalProperties` (TEXT, Optional): Any custom properties specific to your HubSpot account
* `description` (TEXT, Optional): A brief overview of the company, its mission, products, or services
* `ownerId` (TEXT, Optional): The HubSpot user ID of the person responsible for managing this company record
* `website` (TEXT, Optional): The full URL of the company's website
* `phone` (TEXT, Optional): The main contact phone number for the company
* `state` (TEXT, Optional): The state, province, or region where the company is located
**Output:** Returns the created company with ID and all specified properties
***
### Create Deal
##### `hubspot.createDeal`
Creates a new deal in HubSpot
**Requires Confirmation:** Yes
**Parameters:**
* `associatedObjects` (MULTI\_LINE\_TEXT, Optional): Associations to other objects (contacts, companies, etc.)
* `closeDate` (TEXT, Optional): The expected close date of the deal (Format: ISO 8601)
* `additionalProperties` (TEXT, Optional): Any additional custom properties you want to set
* `stage` (TEXT, Optional): The stage of the deal in the pipeline
* `dealOwner` (TEXT, Optional): User ID of the HubSpot user responsible for this deal
* `amount` (NUMBER, Optional): The monetary value of the deal
* `dealName` (TEXT, Required): The name of the deal/opportunity
* `pipeline` (TEXT, Required): The pipeline the deal belongs to
* `description` (TEXT, Optional): A description of the deal
**Output:** Returns the created deal with ID and all specified properties
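A sketch of a `createDeal` parameter set; the `pipeline` and `stage` values below are hypothetical placeholders (retrieve real IDs with `hubspot.getDealContext`), and `closeDate` uses the ISO 8601 format the parameter requires:

```python
from datetime import date
import json

# Hypothetical createDeal parameters.
params = {
    "dealName": "Acme expansion",
    "pipeline": "default",                # placeholder pipeline ID
    "stage": "appointmentscheduled",      # placeholder stage ID
    "amount": 12500,
    "closeDate": date(2025, 9, 30).isoformat(),  # ISO 8601: 2025-09-30
}
print(json.dumps(params, indent=2))
```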
***
### Create Note
##### `hubspot.createNote`
Creates a note that can be associated with an object
**Requires Confirmation:** No
**Parameters:**
* `ownerId` (TEXT, Required): Owner ID of the HubSpot user creating the note
* `noteBody` (TEXT, Required): Text for the note
* `attachmentIds` (TEXT, Optional): If you want to put a single or multiple attachments onto the note
* `associations` (TEXT, Required): A comma-separated list of object type and ID pairs
**Output:** Returns the created note with ID and association details
***
### Update Contact
##### `hubspot.updateContact`
Updates a contact in HubSpot
**Requires Confirmation:** No
**Parameters:**
* `properties` (TEXT, Required): Properties to update on the HubSpot contact (JSON string)
* `contactId` (TEXT, Required): ID of the contact that should be updated
**Output:** Returns the updated contact with new property values
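Since `properties` is a JSON *string* rather than a nested object, the property map has to be serialized before it is passed. A sketch with a placeholder contact ID; the property names used are standard HubSpot internal names:

```python
import json

# Serialize the property map into the JSON string the action expects.
properties = json.dumps({
    "jobtitle": "VP of Engineering",
    "lifecyclestage": "customer",
})

params = {
    "contactId": "12345",  # placeholder contact ID
    "properties": properties,
}
print(params["properties"])
```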
***
### Update Company
##### `hubspot.updateCompany`
Updates a company in HubSpot
**Requires Confirmation:** No
**Parameters:**
* `companyId` (TEXT, Optional): ID of the company that should be updated
* `properties` (TEXT, Optional): The properties parameter expects a JSON object containing company property values to update
**Output:** Returns the updated company with new property values
***
### Update Deal
##### `hubspot.updateDeal`
Updates one or more fields from an existing deal
**Requires Confirmation:** No
**Parameters:**
* `properties` (MULTI\_LINE\_TEXT, Required): Properties to update on the HubSpot deal (JSON string)
* `dealId` (TEXT, Required): ID of the deal that should be updated
**Output:** Returns the updated deal with new property values
***
### Get Contact
##### `hubspot.getContact`
Gets a contact by its ID
**Requires Confirmation:** No
**Parameters:**
* `contactId` (TEXT, Required): ID of the contact to get
**Output:** Returns contact details including all properties and associations
***
### Get Contact Engagement
##### `hubspot.getContactEngagement`
Retrieve engagement information like recent activities of a contact
**Requires Confirmation:** No
**Parameters:**
* `id` (TEXT, Required): The ID of the object to retrieve engagement from
**Output:** Returns engagement information including recent activities and interactions
***
### Get Company
##### `hubspot.getCompany`
Gets a company by its ID
**Requires Confirmation:** No
**Parameters:**
* `companyId` (TEXT, Required): ID of the company to get
**Output:** Returns company details including all properties and associations
***
### Get Deal
##### `hubspot.getDeal`
Gets a deal by its ID
**Requires Confirmation:** No
**Parameters:**
* `dealId` (TEXT, Required): ID of the deal to get
**Output:** Returns deal details including all properties and associations
***
### Get Deal Context
##### `hubspot.getDealContext`
Gets all required information on the custom deal object, available pipelines, and more
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns deal context information including available pipelines and custom properties
***
### Find Contact
##### `hubspot.findContact`
Finds a contact by searching
**Requires Confirmation:** No
**Parameters:**
* `limit` (TEXT, Optional): Maximum number of records to return in a single request
* `properties` (TEXT, Optional): Specifies which contact properties to include in the response
* `searchQuery` (TEXT, Optional): A text string for full-text search across all searchable properties
* `filterGroups` (TEXT, Optional): Allows you to create complex filtering logic to narrow down contact search results
* `sorts` (TEXT, Optional): Defines how results should be ordered
**Output:** Returns array of contacts matching the search criteria
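A sketch of the filter structure, following HubSpot's search semantics: filter groups are OR-ed together, while filters inside one group are AND-ed. The property values below are illustrative:

```python
import json

# One filter group: lifecycle stage AND country must both match.
filter_groups = [
    {
        "filters": [
            {"propertyName": "lifecyclestage", "operator": "EQ", "value": "lead"},
            {"propertyName": "country", "operator": "EQ", "value": "Germany"},
        ]
    }
]

params = {
    "filterGroups": json.dumps(filter_groups),
    "properties": json.dumps(["email", "firstname", "lastname"]),
    "limit": "25",
    "sorts": json.dumps([{"propertyName": "createdate", "direction": "DESCENDING"}]),
}
print(len(json.loads(params["filterGroups"])[0]["filters"]))  # 2
```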
***
### Find Company
##### `hubspot.findCompany`
Finds a company by searching
**Requires Confirmation:** No
**Parameters:**
* `searchQuery` (TEXT, Optional): A text string for full-text search across all searchable properties
* `properties` (TEXT, Optional): Specifies which company properties to include in the response
* `sorts` (TEXT, Optional): Defines how company search results should be ordered
* `limit` (NUMBER, Optional): Maximum number of records to return in a single request
* `filterGroups` (TEXT, Optional): Allows you to create complex filtering logic to narrow down company search results
**Output:** Returns array of companies matching the search criteria
***
### Find Deal
##### `hubspot.findDeal`
Finds a deal by searching
**Requires Confirmation:** No
**Parameters:**
* `limit` (NUMBER, Optional): Maximum number of records to return in a single request
* `searchQuery` (TEXT, Optional): A text string for full-text search across all searchable properties
* `filterGroups` (TEXT, Optional): Allows you to create complex filtering logic to narrow down search results
* `properties` (TEXT, Optional): Specifies which deal properties to include in the response
* `sorts` (TEXT, Optional): Defines how results should be ordered
**Output:** Returns array of deals matching the search criteria
***
### Get HubSpot Owners
##### `hubspot.getHubSpotOwners`
Retrieves all HubSpot owners/users with optional filtering by email
**Requires Confirmation:** No
**Parameters:**
* `email` (TEXT, Optional): Search for a specific owner by their email address
* `limit` (NUMBER, Optional): Maximum number of owners to retrieve (default 100, maximum 500)
* `includeInactive` (BOOLEAN, Optional): Whether to include archived/inactive owners in the results
**Output:** Returns array of owners with their IDs, emails, and other details
***
### Get Current User Context
##### `hubspot.getCurrentUserContext`
Gets the current user's email, hubspot\_owner\_id, and other important info
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns current user context including email and owner ID
***
## Common Use Cases
* Manage and organize your HubSpot data
* Automate workflows with HubSpot
* Generate insights and reports
* Connect HubSpot with other tools
## Best Practices
**Getting Started:**
1. Enable the HubSpot integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the HubSpot integration, contact [support@langdock.com](mailto:support@langdock.com)
# Jira
Source: https://docs.langdock.com/administration/integrations/jira
Software for bug tracking, issue tracking and agile project management
## Overview
Software for bug tracking, issue tracking and agile project management. Through Langdock's integration, you can access and manage Jira directly from your conversations.
**Authentication:** OAuth\
**Category:** Development & Issue Tracking\
**Availability:** All workspace plans
## Available Actions
### Create Issue
##### `jira.createIssue`
Creates an issue or, where the option to create subtasks is enabled in Jira, a subtask
**Requires Confirmation:** Yes
**Parameters:**
* `parentKey` (TEXT, Optional): The key of the parent issue, e.g. key of an epic
* `assigneeId` (TEXT, Optional): The Account ID of the assignee user
* `description` (MULTI\_LINE\_TEXT, Optional): A description of the issue (plain text or JSON-formatted Jira document)
* `projectKey` (TEXT, Required): The key of the project to assign the newly created issue to
* `summary` (TEXT, Required): A short summary of the issue
* `issueTypeId` (TEXT, Required): The ID of the issue type
* `customFields` (TEXT, Optional): Custom field values when creating a Jira issue (JSON object)
**Output:** Returns the created issue with key, ID, and all specified details
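A sketch of a `createIssue` parameter set. The `issueTypeId` and custom field ID are hypothetical placeholders; real values come from `jira.getIssueTypesforProject` and `jira.getFieldMetadataforIssueType`:

```python
import json

# Hypothetical createIssue parameters.
params = {
    "projectKey": "ENG",
    "summary": "Login page returns 500 on empty password",
    "issueTypeId": "10001",  # placeholder issue type ID
    "description": "Steps to reproduce:\n1. Open /login\n2. Submit an empty form",
    "customFields": json.dumps({"customfield_10042": "Backend"}),  # placeholder field
}
print(json.loads(params["customFields"]))
```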
***
### Search for Issues
##### `jira.searchforIssues`
Searches for issues using JQL
**Requires Confirmation:** No
**Parameters:**
* `jql` (TEXT, Optional): A JQL expression. For performance reasons, this parameter requires a bounded query
* `fields` (TEXT, Optional): A list of Jira issue fields to include in the response, formatted as a JSON array
**Output:** Returns array of issues matching the JQL query with specified fields
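A "bounded" JQL query restricts scope (for example by project and a time window) instead of scanning every issue. A sketch with an illustrative project key:

```python
# Illustrative bounded JQL: limited to one project and the last 14 days.
jql = (
    'project = "ENG" '
    "AND status != Done "
    "AND updated >= -14d "
    "ORDER BY updated DESC"
)
fields = '["summary", "status", "assignee"]'  # JSON array, per the parameter docs

params = {"jql": jql, "fields": fields}
print(params["jql"])
```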
***
### Get Issue Types for Project
##### `jira.getIssueTypesforProject`
Gets all issue types for a project
**Requires Confirmation:** No
**Parameters:**
* `projectId` (TEXT, Required): The ID of the project
**Output:** Returns array of issue types available for the project
***
### Get All Issue Types for User
##### `jira.getAllIssueTypesforUser`
Gets all issue types for a user
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns array of all issue types available to the user
***
### Get Recent Projects
##### `jira.getRecentProjects`
Returns a list of up to 20 projects recently viewed by the user
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns array of recent projects with their details
***
### Find Users
##### `jira.findUsers`
Returns a list of active users that match the search string and property
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Optional): A query string that is matched against user attributes
**Output:** Returns array of users matching the search criteria
***
### Get Current User
##### `jira.getCurrentUser`
Gets details about the user of the current connection
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns current user details including ID, email, and display name
***
### Get Issue
##### `jira.getIssue`
Gets an issue by ID or key
**Requires Confirmation:** No
**Parameters:**
* `issueId` (TEXT, Required): ID or key of the issue that should be retrieved
**Output:** Returns issue details including all fields and properties
***
### Update Issue
##### `jira.updateIssue`
Updates an issue
**Requires Confirmation:** Yes
**Parameters:**
* `issueId` (TEXT, Required): ID of the issue that should be updated
* `assigneeId` (TEXT, Optional): The Account ID of the assignee user
* `description` (TEXT, Optional): A description of the issue
* `projectKey` (TEXT, Optional): The key of the project the issue should be assigned to
* `summary` (TEXT, Optional): A short summary of the issue
* `issueTypeId` (TEXT, Optional): The ID of the issue type
* `customFields` (TEXT, Optional): Custom field values when updating a Jira issue
**Output:** Returns the updated issue with new values
***
### Create Subtask
##### `jira.createSubtask`
Creates a subtask for an existing issue
**Requires Confirmation:** Yes
**Parameters:**
* `parrentissueKey` (TEXT, Required): The key of the parent issue
* `summary` (TEXT, Required): The title/summary of the subtask
* `projectKey` (TEXT, Optional): The project key (will be extracted from parent issue if not provided)
* `subtasktypeId` (TEXT, Optional): The ID of the subtask issue type (defaults to '10000')
* `description` (TEXT, Optional): A description of the subtask
* `assigneeId` (TEXT, Optional): The ID of the user to assign the subtask to
* `reporterId` (TEXT, Optional): The ID of the user who is reporting the subtask
* `priorityId` (TEXT, Optional): The ID of the priority level for the subtask
* `customFields` (TEXT, Optional): Custom field values when creating a Jira subtask
**Output:** Returns the created subtask with key, ID, and parent relationship
***
### Move Issue by Transition ID
##### `jira.moveIssuebyTransitionID`
Moves an issue through workflow stages using a transition ID
**Requires Confirmation:** Yes
**Parameters:**
* `issueId` (TEXT, Required): ID or key of the issue that should be moved
* `transitionId` (TEXT, Required): ID of the transition to perform
* `comment` (TEXT, Optional): A comment to add during the transition
**Output:** Returns confirmation of the transition
***
### Get Transition ID
##### `jira.getTransitionID`
Gets available transition IDs for a Jira issue
**Requires Confirmation:** No
**Parameters:**
* `issueId` (TEXT, Required): ID or key of the issue for which the transitions should be retrieved
**Output:** Returns array of available transitions with their IDs and names
***
### Get Field Metadata for Issue Type
##### `jira.getFieldMetadataforIssueType`
Gets all available field metadata for an issue type
**Requires Confirmation:** No
**Parameters:**
* `projectId` (TEXT, Required): ID or key of the project the issue metadata should be retrieved for
* `issueTypeId` (TEXT, Required): ID of the issue type the data should be retrieved for
**Output:** Returns field metadata including custom fields and their configurations
***
### Get Project
##### `jira.getProject`
Returns the project details for a project
**Requires Confirmation:** No
**Parameters:**
* `projectIdOrKey` (TEXT, Required): The project ID or project key
* `expand` (TEXT, Optional): Use expand to include additional information in the response
* `properties` (TEXT, Optional): A list of project properties to return for the project
**Output:** Returns project details including name, key, description, and other properties
***
### Get Comments
##### `jira.getComments`
Returns all comments for an issue
**Requires Confirmation:** No
**Parameters:**
* `issueIdOrKey` (TEXT, Required): The ID or key of the issue
* `startAt` (TEXT, Optional): The index of the first item to return in a page of results
* `maxResults` (TEXT, Optional): The maximum number of items to return per page
* `orderBy` (TEXT, Optional): Order the results by a field
* `expand` (TEXT, Optional): Use expand to include additional information about comments
**Output:** Returns array of comments with their content, author, and timestamps
***
### Add Comment
##### `jira.addComment`
Adds a comment to an issue
**Requires Confirmation:** Yes
**Parameters:**
* `issueIdOrKey` (TEXT, Required): The ID or key of the issue
* `comment` (MULTI\_LINE\_TEXT, Required): The text content of the comment
* `visibilityType` (TEXT, Optional): Whether the comment visibility is restricted by group or project role
* `visibilityValue` (TEXT, Optional): The name of the group or the name of the project role
* `visibilityIdentifier` (TEXT, Optional): The ID of the group or the name of the project role
* `expand` (TEXT, Optional): Use expand to include additional information about comments
* `properties` (TEXT, Optional): A list of comment properties as JSON
**Output:** Returns the created comment with ID and content
***
### Update Comment
##### `jira.updateComment`
Updates a comment
**Requires Confirmation:** Yes
**Parameters:**
* `issueIdOrKey` (TEXT, Required): The ID or key of the issue
* `commentId` (TEXT, Required): The ID of the comment
* `comment` (MULTI\_LINE\_TEXT, Required): The text content of the comment
* `visibilityType` (TEXT, Optional): Whether the comment visibility is restricted by group or project role
* `visibilityValue` (TEXT, Optional): The name of the group or the name of the project role
* `visibilityIdentifier` (TEXT, Optional): The ID of the group or the name of the project role
* `notifyUsers` (TEXT, Optional): Whether users are notified by email
* `overrideEditableFlag` (TEXT, Optional): Whether screen security is overridden
* `expand` (TEXT, Optional): Use expand to include additional information about comments
* `properties` (TEXT, Optional): A list of comment properties as JSON
**Output:** Returns the updated comment with new content
***
### Get Project Stages
##### `jira.getProjectStages`
Gets the stages for issue types of a project
**Requires Confirmation:** No
**Parameters:**
* `projectId` (TEXT, Required): ID or key of the project the stages should be retrieved for
**Output:** Returns array of project stages with their details
#### Triggers
***
### Updated Issue
##### `jira.updatedIssue`
Triggers when an issue was updated
**Requires Confirmation:** No
**Parameters:**
* `projectKey` (TEXT, Optional): The key of the project that should trigger the workflow
* `issueType` (TEXT, Optional): The type of issue that should trigger the workflow
**Output:** Returns the result of the operation
***
### New Issue
##### `jira.newIssue`
Triggers when new issues are created
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns the result of the operation
***
### New Issue (JQL)
##### `jira.newIssueJQL`
Triggers when new issues are created
**Requires Confirmation:** No
**Parameters:**
* `jqlQuery` (TEXT, Required): JQL query to filter issues
**Output:** Returns the result of the operation
***
## Common Use Cases
* Manage and organize your Jira data
* Automate workflows with Jira
* Generate insights and reports
* Connect Jira with other tools
## Best Practices
**Getting Started:**
1. Enable the Jira integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Jira integration, contact [support@langdock.com](mailto:support@langdock.com)
# Linear
Source: https://docs.langdock.com/administration/integrations/linear
A project management tool for software teams that streamlines issue tracking
## Overview
A project management tool for software teams that streamlines issue tracking. Through Langdock's integration, you can access and manage Linear directly from your conversations.
**Authentication:** OAuth\
**Category:** Development & Issue Tracking\
**Availability:** All workspace plans
## Available Actions
### Create Issue
##### `linear.createIssue`
Creates an issue in Linear
**Requires Confirmation:** Yes
**Parameters:**
* `teamId` (TEXT, Required): The ID of the team to create the issue in. Team IDs can be retrieved using the 'Get Teams' action
* `title` (TEXT, Required): The title of the issue
* `description` (TEXT, Optional): The description of the issue in markdown format
* `options` (OBJECT, Optional): Provide additional Linear issue properties as a JSON object. This field accepts any valid Linear issue fields beyond the basic ones above
**Output:** Returns the created issue with ID, title, identifier, URL, priority, state, assignee, and labels
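A minimal sketch of assembling the parameters, including the open-ended `options` object. The team ID is a placeholder, and the `priority` and `estimate` fields inside `options` are illustrative examples of additional Linear issue fields, not part of this action's fixed schema:

```python
import json

# Hypothetical payload for linear.createIssue -- IDs and option fields are illustrative.
payload = {
    "teamId": "TEAM_ID",                  # retrieve via the 'Get Teams' action
    "title": "Fix login timeout",
    "description": "Steps to reproduce:\n1. Log in\n2. Wait 30 minutes",
    "options": {                          # any additional valid Linear issue fields
        "priority": 2,                    # e.g. 1 = urgent ... 4 = low
        "estimate": 3,
    },
}

# The options object must serialize to valid JSON.
serialized = json.dumps(payload["options"])
```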
***
### Create Comment
##### `linear.createComment`
Creates a new issue comment in Linear
**Requires Confirmation:** Yes
**Parameters:**
* `issueId` (TEXT, Optional): The ID of the issue to comment on
* `commentBody` (TEXT, Required): The body of the comment to add
**Output:** Returns the created comment with ID and content
***
### Update an Issue
##### `linear.updateanIssue`
Updates an existing issue in Linear
**Requires Confirmation:** Yes
**Parameters:**
* `title` (TEXT, Optional): The new title of the issue
* `assigneeId` (TEXT, Optional): The ID of the person assigned to the task
* `stateId` (TEXT, Optional): The ID of the workflow state to set. Available states typically include Backlog, Todo, To discuss, In Progress, Blocked, In Review, Waiting for Release, Done, Canceled, and Triage
* `issueId` (TEXT, Required): The ID of the issue that should be updated
**Output:** Returns the updated issue with new values
***
### Get Issue Details
##### `linear.getIssueDetails`
Gets the details of a specific issue
**Requires Confirmation:** No
**Parameters:**
* `issueId` (TEXT, Required): The ID of the issue
**Output:** Returns issue details including ID, title, description, state, assignee, labels, and other properties
***
### Get Team Members
##### `linear.getTeamMembers`
Gets all team members in a given Linear team
**Requires Confirmation:** No
**Parameters:**
* `teamId` (TEXT, Required): The ID of the team
**Output:** Returns array of team members with their IDs, names, and other details
***
### Get Current User
##### `linear.getCurrentUser`
Get the user details of your profile in Linear
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns current user details including ID, name, email, and other profile information
***
### Get Teams
##### `linear.getTeams`
Lists all teams in Linear workspace
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns array of teams with their IDs, names, and other details
***
### Get Issues in Team
##### `linear.getIssuesinTeam`
Gets all issues from a given team
**Requires Confirmation:** No
**Parameters:**
* `teamId` (TEXT, Required): The team ID to use for searching issues
**Output:** Returns array of issues in the specified team
***
### Search Issues
##### `linear.searchIssues`
Searches for issues in Linear
**Requires Confirmation:** No
**Parameters:**
* `teamId` (TEXT, Optional): The ID of the team to search issues in. Leave empty to search across all teams
* `query` (TEXT, Optional): Text to search for in issue titles and descriptions
* `status` (TEXT, Optional): Filter by issue status (e.g., 'backlog', 'in\_progress', 'done')
* `assigneeId` (TEXT, Optional): Filter issues by assignee ID
* `limit` (TEXT, Optional): Maximum number of issues to return (default: 50, max: 100)
**Output:** Returns array of issues matching the search criteria
***
## Common Use Cases
* Manage and organize your Linear data
* Automate workflows with Linear
* Generate insights and reports
* Connect Linear with other tools
## Best Practices
**Getting Started:**
1. Enable the Linear integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Linear integration, contact [support@langdock.com](mailto:support@langdock.com)
# Microsoft Teams
Source: https://docs.langdock.com/administration/integrations/microsoft-teams
Platform that combines chat, video meetings, file storage, and app integration
## Overview
Microsoft Teams is a platform that combines chat, video meetings, file storage, and app integration. Through Langdock's integration, you can access and manage Microsoft Teams directly from your conversations.
**Authentication:** OAuth\
**Category:** Microsoft 365\
**Availability:** All workspace plans
## Available Actions
### Send Channel Message
##### `microsoftteams.sendChannelMessage`
Sends a message in a Teams channel on the user's behalf
**Requires Confirmation:** Yes
**Parameters:**
* `content` (TEXT, Required): The message you want to send
* `channelId` (TEXT, Required): The unique identifier of the channel you want to send the message in
* `teamId` (TEXT, Required): The unique identifier of the Team you want to send the message in
**Output:** Returns the sent message with ID and content
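Both IDs must be resolved first, via the 'List All Teams' and 'List All Channels' actions. A sketch of that lookup chain over placeholder results (the `displayName` and `teamId` fields in the sample responses are assumptions about the response shape):

```python
# Placeholder results from microsoftteams.listAllTeams / listAllChannels.
teams = [{"id": "team-123", "displayName": "Engineering"}]
channels = [{"id": "channel-456", "displayName": "General", "teamId": "team-123"}]

# Resolve the team by name, then the channel within that team.
team = next(t for t in teams if t["displayName"] == "Engineering")
channel = next(
    c for c in channels
    if c["teamId"] == team["id"] and c["displayName"] == "General"
)

message_params = {
    "teamId": team["id"],
    "channelId": channel["id"],
    "content": "Deploy finished successfully.",
}
```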
***
### List All Chats
##### `microsoftteams.listAllChats`
Lists all chats the user is part of
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns array of chats with their IDs, names, and other details
***
### List All Users in Chat
##### `microsoftteams.listAllUsersinChat`
Lists all users in a specific chat
**Requires Confirmation:** No
**Parameters:**
* `chatId` (TEXT, Required): The unique ID of the chat whose users you want to retrieve
**Output:** Returns array of users in the specified chat
***
### Send Chat Message
##### `microsoftteams.sendChatMessage`
Sends a message in a Teams chat on the user's behalf
**Requires Confirmation:** Yes
**Parameters:**
* `chatId` (TEXT, Required): The unique identifier of the chat you want to send the message in
* `content` (TEXT, Required): The message you want to send
**Output:** Returns the sent message with ID and content
***
### List All Teams
##### `microsoftteams.listAllTeams`
Lists all teams the user is part of
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns array of teams with their IDs, names, and other details
***
### List All Channels
##### `microsoftteams.listAllChannels`
Lists all channels from a specific team the user is part of
**Requires Confirmation:** No
**Parameters:**
* `teamId` (TEXT, Required): The ID of the team you want all channels listed for
**Output:** Returns array of channels with their IDs, names, and other details
***
### Search Messages
##### `microsoftteams.searchMessages`
Searches for chat & channel messages in Microsoft Teams that match the specified query
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Required): Search term used to find messages in Microsoft Teams
**Output:** Returns array of messages matching the search query
***
### Find Chat
##### `microsoftteams.findChat`
Finds a Microsoft Teams chat by its topic name
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Required): Search term used to filter Microsoft Teams chats by their topic names
**Output:** Returns array of chats matching the search criteria
#### Triggers
***
### New Channel Message
##### `microsoftteams.newChannelMessage`
Triggers when a new message to a channel is sent
**Requires Confirmation:** No
**Parameters:**
* `channelId` (TEXT, Required): The ID of the channel you want to monitor for new messages
* `keyword` (TEXT, Optional): Keywords that should be included in the message to trigger
* `teamId` (TEXT, Required): The unique ID of the team the channel you want to monitor belongs to
**Output:** Returns the result of the operation
***
### New Chat Message
##### `microsoftteams.newChatMessage`
Triggers when you receive a new chat message
**Requires Confirmation:** No
**Parameters:**
* `chatId` (TEXT, Required): The ID of a chat that you want to monitor
* `keyword` (TEXT, Optional): Keyword or multiple keywords you would like to filter for
**Output:** Returns the result of the operation
***
### New Channel Mention
##### `microsoftteams.newChannelMention`
Triggers when you are mentioned in a channel
**Requires Confirmation:** No
**Parameters:**
* `teamId` (TEXT, Optional): The ID of the team that should be monitored for mentions in channels
* `channelId` (TEXT, Optional): The ID of the channel that should be monitored for mentions in channels
* `keyword` (TEXT, Optional): To filter messages containing a specific keyword
* `mentionType` (TEXT, Optional): Type of mention to filter for (can be 'channel', 'team', or 'person')
**Output:** Returns the result of the operation
***
## Common Use Cases
* Manage and organize your Microsoft Teams data
* Automate workflows with Microsoft Teams
* Generate insights and reports
* Connect Microsoft Teams with other tools
## Best Practices
**Getting Started:**
1. **Prerequisite:** A Microsoft Admin must [approve the Langdock application](/administration/microsoft-admin-approval) in your Microsoft workspace once.
2. Enable the Microsoft Teams integration in your workspace settings
3. Authenticate using OAuth
4. Test the connection with a simple read operation
5. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Microsoft Teams integration, contact [support@langdock.com](mailto:support@langdock.com)
# Milvus
Source: https://docs.langdock.com/administration/integrations/milvus
High-performance vector database for AI applications and similarity search
## Overview
Milvus is a high-performance vector database for AI applications and similarity search. Through Langdock's integration, you can access and manage Milvus directly from your conversations.
**Authentication:** API Key\
**Category:** AI & Search\
**Availability:** All workspace plans
## Available Actions
### Search Collection
##### `milvus.searchCollection`
Searches the database for the most relevant information based on the query provided
**Requires Confirmation:** No
**Parameters:**
* `query` (VECTOR, Required): The query vector for similarity search
**Output:** Returns array of similar vectors with their IDs, scores, and metadata
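The query vector must match the dimensionality of the collection's embeddings. A sketch below uses an illustrative 4-dimensional vector (real embedding collections typically use hundreds of dimensions) and applies L2 normalization, a common preprocessing step before cosine- or inner-product-based similarity search:

```python
import math

# Illustrative 4-d query embedding; match your collection's dimensionality.
query_vector = [0.12, -0.45, 0.33, 0.80]

# Normalize to unit length before cosine/IP similarity search.
norm = math.sqrt(sum(x * x for x in query_vector))
normalized = [x / norm for x in query_vector]

search_params = {"query": normalized}
```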
***
## Common Use Cases
* Manage and organize your Milvus data
* Automate workflows with Milvus
* Generate insights and reports
* Connect Milvus with other tools
## Best Practices
**Getting Started:**
1. Enable the Milvus integration in your workspace settings
2. Authenticate using API Key
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your API Key credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Milvus integration, contact [support@langdock.com](mailto:support@langdock.com)
# Miro
Source: https://docs.langdock.com/administration/integrations/miro
Visual workspace for innovation that enables distributed teams to collaborate
## Overview
Miro is a visual workspace for innovation that enables distributed teams to collaborate. Through Langdock's integration, you can access and manage Miro directly from your conversations.
**Authentication:** OAuth\
**Category:** Productivity & Collaboration\
**Availability:** All workspace plans
## Available Actions
### Get Boards
##### `miro.getBoards`
Retrieves a list of boards accessible to the user
**Requires Confirmation:** No
**Parameters:**
* `teamId` (TEXT, Optional): Team ID to filter boards
**Output:** Returns array of boards with their IDs, names, descriptions, and other details
***
### Search Boards
##### `miro.searchBoards`
Search for boards by name or description
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Required): Search term to find boards
* `teamId` (TEXT, Optional): Team ID to filter search results
* `limit` (NUMBER, Optional): Maximum number of boards to return (default: 10)
**Output:** Returns array of boards matching the search criteria
***
### Get Recent Boards
##### `miro.getRecentBoards`
Get recently modified boards, sorted by modification date
**Requires Confirmation:** No
**Parameters:**
* `limit` (NUMBER, Optional): Maximum number of boards to return (default: 10)
* `teamId` (TEXT, Optional): Team ID to filter boards
**Output:** Returns array of recently modified boards
***
### Get Board Details
##### `miro.getBoardDetails`
Retrieves detailed information about a specific board
**Requires Confirmation:** No
**Parameters:**
* `boardId` (TEXT, Required): The unique identifier of the board
**Output:** Returns board details including ID, name, description, and other properties
***
### Get Board Items
##### `miro.getBoardItems`
Retrieves all items on a board with optional filtering by type
**Requires Confirmation:** No
**Parameters:**
* `boardId` (TEXT, Required): The unique identifier of the board
* `itemType` (TEXT, Optional): Filter by item type (e.g., sticky\_note, text, shape, frame, card)
**Output:** Returns array of board items with their details
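When the `itemType` filter is omitted, the returned items can also be grouped client-side. A sketch over a placeholder response (the `type` field name follows Miro's item model but should be treated as an assumption here):

```python
from collections import defaultdict

# Placeholder items as might be returned by miro.getBoardItems.
items = [
    {"id": "1", "type": "sticky_note"},
    {"id": "2", "type": "shape"},
    {"id": "3", "type": "sticky_note"},
]

# Group item IDs by their type for downstream processing.
by_type = defaultdict(list)
for item in items:
    by_type[item["type"]].append(item["id"])
```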
***
### Get Frames
##### `miro.getFrames`
Retrieves all frames on a board
**Requires Confirmation:** No
**Parameters:**
* `boardId` (TEXT, Required): The unique identifier of the board
**Output:** Returns array of frames with their details
***
### Get Frame Items
##### `miro.getFrameItems`
Retrieves all items within a specific frame
**Requires Confirmation:** No
**Parameters:**
* `boardId` (TEXT, Required): The unique identifier of the board
* `frameId` (TEXT, Required): The unique identifier of the frame
**Output:** Returns array of items within the specified frame
***
### Get Board Tags
##### `miro.getBoardTags`
Retrieves all tags used on a board for organization and categorization
**Requires Confirmation:** No
**Parameters:**
* `boardId` (TEXT, Required): The unique identifier of the board
**Output:** Returns array of tags with their details
***
### Get Specific Item
##### `miro.getSpecificItem`
Retrieves a specific item on a board by its ID
**Requires Confirmation:** No
**Parameters:**
* `boardId` (TEXT, Required): The ID of the board containing the item
* `itemId` (TEXT, Required): The ID of the specific item to retrieve
**Output:** Returns the specific item with its details
***
### Get Frame
##### `miro.getFrame`
Retrieves a specific frame item by its ID
**Requires Confirmation:** No
**Parameters:**
* `boardId` (TEXT, Required): The ID of the board containing the frame
* `frameId` (TEXT, Required): The ID of the frame to retrieve
**Output:** Returns the specific frame with its details
***
### Get Card Item
##### `miro.getCardItem`
Retrieves a specific card item by its ID
**Requires Confirmation:** No
**Parameters:**
* `boardId` (TEXT, Required): The ID of the board containing the card
* `cardId` (TEXT, Required): The ID of the card to retrieve
**Output:** Returns the specific card with its details
***
### Get Document Item
##### `miro.getDocumentItem`
Retrieves a specific document item by its ID
**Requires Confirmation:** No
**Parameters:**
* `boardId` (TEXT, Required): The ID of the board containing the document
* `documentId` (TEXT, Required): The ID of the document to retrieve
**Output:** Returns the specific document with its details
***
### Get Embed Item
##### `miro.getEmbedItem`
Retrieves a specific embed item by its ID
**Requires Confirmation:** No
**Parameters:**
* `boardId` (TEXT, Required): The ID of the board containing the embed
* `itemId` (TEXT, Required): The ID of the embed to retrieve
**Output:** Returns the specific embed with its details
***
### Get Shape Item
##### `miro.getShapeItem`
Retrieves a specific shape item by its ID
**Requires Confirmation:** No
**Parameters:**
* `boardId` (TEXT, Required): The ID of the board containing the shape
* `itemId` (TEXT, Required): The ID of the shape to retrieve
**Output:** Returns the specific shape with its details
***
### Get Sticky Note Item
##### `miro.getStickyNoteItem`
Retrieves a specific sticky note item by its ID
**Requires Confirmation:** No
**Parameters:**
* `boardId` (TEXT, Required): The ID of the board containing the sticky note
* `itemId` (TEXT, Required): The ID of the sticky note to retrieve
**Output:** Returns the specific sticky note with its details
***
### Get Text Item
##### `miro.getTextItem`
Retrieves a specific text item by its ID
**Requires Confirmation:** No
**Parameters:**
* `boardId` (TEXT, Required): The ID of the board containing the text
* `itemId` (TEXT, Required): The ID of the text item to retrieve
**Output:** Returns the specific text item with its details
***
## Common Use Cases
* Manage and organize your Miro data
* Automate workflows with Miro
* Generate insights and reports
* Connect Miro with other tools
## Best Practices
**Getting Started:**
1. Enable the Miro integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Miro integration, contact [support@langdock.com](mailto:support@langdock.com)
# Monday.com
Source: https://docs.langdock.com/administration/integrations/monday
Work operating system that unifies project management, task tracking, and team collaboration
## Overview
Monday.com is a work operating system that unifies project management, task tracking, and team collaboration. Through Langdock's integration, you can access and manage Monday.com directly from your conversations.
**Authentication:** API Key\
**Category:** Productivity & Collaboration\
**Availability:** All workspace plans
## Available Actions
### Get Boards
##### `mondaycom.getBoards`
Returns the available boards and their information
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns a list of boards with their IDs and names
***
### Get Items
##### `mondaycom.getItems`
Retrieves items available on a specific board
**Requires Confirmation:** No
**Parameters:**
* `boardId` (TEXT, Required): The unique identifier for the Monday.com board from which to retrieve items
* `columnIds` (TEXT, Optional): An array of column IDs that specifies which columns' values should be retrieved for each item
* `itemLimit` (NUMBER, Optional): An integer specifying the maximum number of items to retrieve from the board
**Output:** Returns items from the specified board with their column values
***
### Get Item Updates
##### `mondaycom.getItemUpdates`
Retrieves updates (comments) for a specific item
**Requires Confirmation:** No
**Parameters:**
* `itemId` (TEXT, Required): The unique identifier for the item
* `limit` (NUMBER, Optional): Maximum number of updates to retrieve (default: 10)
**Output:** Returns updates/comments for the specified item
***
### Create Column
##### `mondaycom.createColumn`
Creates a new column on a specific board
**Requires Confirmation:** Yes
**Parameters:**
* `boardId` (TEXT, Required): The unique identifier for the Monday.com board
* `title` (TEXT, Required): The title of the new column
* `columnType` (TEXT, Required): The type of column to create (e.g., text, number, status)
* `columnSettings` (MULTI\_LINE\_TEXT, Optional): JSON formatted settings for the column
**Output:** Returns the created column details
***
### Create Subitem
##### `mondaycom.createSubitem`
Creates a new subitem under a parent item
**Requires Confirmation:** Yes
**Parameters:**
* `parentItemId` (TEXT, Required): The unique identifier of the parent item
* `itemName` (TEXT, Required): The name of the new subitem
* `columnValues` (TEXT, Optional): JSON formatted column values for the new subitem
**Output:** Returns the created subitem details
***
### Create Task
##### `mondaycom.createTask`
Creates a new item in a Monday.com board with specified column values
**Requires Confirmation:** Yes
**Parameters:**
* `boardId` (NUMBER, Required): The unique identifier of the board where the item will be created
* `itemName` (TEXT, Required): The name of the new item
* `columnValues` (MULTI\_LINE\_TEXT, Required): JSON formatted column values for the new item (e.g., `{"status": "Done", "date": "2023-04-15"}`)
* `groupId` (NUMBER, Optional): Group ID to place the item in a specific group
**Output:** Returns the created task/item details
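Because `columnValues` is a JSON-formatted string rather than an object, it is easiest to build it as a dictionary and serialize it. The board ID below is a placeholder, and the column IDs follow the example in the parameter description above:

```python
import json

# Column IDs come from your board's schema; these follow the example above.
column_values = {"status": "Done", "date": "2023-04-15"}

task_params = {
    "boardId": 123456789,                     # NUMBER per the schema; placeholder value
    "itemName": "Quarterly report",
    "columnValues": json.dumps(column_values),  # passed as a JSON string
}
```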
***
### Add Update to Item
##### `mondaycom.addUpdatetoItem`
Adds an update (comment) to a specific item
**Requires Confirmation:** Yes
**Parameters:**
* `itemId` (TEXT, Required): The unique identifier for the item
* `updateText` (MULTI\_LINE\_TEXT, Required): The text content of the update/comment
**Output:** Returns the added update details
***
### Update Item
##### `mondaycom.updateItem`
Updates a specific item in a board in Monday.com
**Requires Confirmation:** Yes
**Parameters:**
* `itemId` (TEXT, Required): The unique identifier for the item (task) to be updated
* `boardId` (TEXT, Required): The unique identifier of the board containing the item
* `value` (TEXT, Required): The new value for the column
* `columnId` (TEXT, Required): The ID of the column to update (e.g., 'status', 'text', 'date')
**Output:** Returns the updated item details
***
### Update Item Column Values
##### `mondaycom.updateItemColumnValues`
Updates column values for an existing item in Monday.com
**Requires Confirmation:** Yes
**Parameters:**
* `boardId` (TEXT, Required): The unique identifier of the board containing the item
* `itemId` (TEXT, Required): The unique identifier of the item to update
* `columnId` (TEXT, Required): The ID of the column to update (e.g., 'status', 'text', 'date')
* `columnValue` (TEXT, Required): The new value for the column
**Output:** Returns the updated item details
***
### Move Item to Group
##### `mondaycom.moveItemtoGroup`
Moves an item to a different group within the same board
**Requires Confirmation:** Yes
**Parameters:**
* `boardId` (TEXT, Required): The unique identifier for the Monday.com board
* `itemId` (TEXT, Required): The unique identifier for the item to move
* `groupId` (TEXT, Required): The identifier of the destination group
**Output:** Returns the moved item details
***
## Common Use Cases
* Manage and organize your Monday.com data
* Automate workflows with Monday.com
* Generate insights and reports
* Connect Monday.com with other tools
## Best Practices
**Getting Started:**
1. Enable the Monday.com integration in your workspace settings
2. Authenticate using API Key
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your API Key credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Monday.com integration, contact [support@langdock.com](mailto:support@langdock.com)
# Notion
Source: https://docs.langdock.com/administration/integrations/notion
Workspace combining notes, databases, wikis, and project management in one place
## Overview
Notion is a workspace that combines notes, databases, wikis, and project management in one place. Through Langdock's integration, you can access and manage Notion directly from your conversations.
**Authentication:** OAuth\
**Category:** Productivity & Collaboration\
**Availability:** All workspace plans
## Available Actions
### Get Page Content
##### `notion.getPageContent`
Retrieves the content of a specific page or block and all its children from Notion and converts them to markdown
**Requires Confirmation:** No
**Parameters:**
* `blockId` (TEXT, Required): The unique identifier of the Notion page or block to retrieve. All nested child blocks and their formatted content are included
**Output:** Returns the page content converted to markdown format
***
### Query Database
##### `notion.queryDatabase`
Returns pages from a database with optional filters, sorts, and pagination. Use this action whenever you want to fetch multiple pages from a database
**Requires Confirmation:** No
**Parameters:**
* `databaseId` (TEXT, Required): ID or URL of the database to query
* `filter` (TEXT, Optional): Notion filter object JSON. Supports 'and'/'or' compound filters and all type-specific conditions
* `sorts` (OBJECT, Optional): Array of Notion sort objects. Example: `[{"property": "Last ordered", "direction": "ascending"}]`
* `pageSize` (NUMBER, Optional): Number of results per page (max 100). Defaults to 30
* `startCursor` (TEXT, Optional): Cursor from a previous response for pagination
* `filterProperties` (TEXT, Optional): Comma-separated property IDs to include in the response
* `returnAll` (BOOLEAN, Optional): If true, paginates until all results are collected
* `simplifyOutput` (BOOLEAN, Optional): Return simplified pages with id, url, title and flattened properties
**Output:** Returns array of database pages with their properties and content
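The `filter` parameter takes a Notion filter object as a JSON string. A sketch of a compound filter, assuming illustrative property names ("Status", "Price") that you would replace with the names from your database's schema:

```python
import json

# Pages where Status equals "Active" AND Price is greater than 100.
# Property names are illustrative -- check them via Get Database Details.
filter_obj = {
    "and": [
        {"property": "Status", "status": {"equals": "Active"}},
        {"property": "Price", "number": {"greater_than": 100}},
    ]
}

# Pass the filter as a JSON-formatted string.
filter_param = json.dumps(filter_obj)
print(filter_param)
```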
***
### Create Database
##### `notion.createDatabase`
Creates a database as a subpage in the specified parent page, with the specified properties schema. Requires parent page to be an actual page or Wiki
**Requires Confirmation:** Yes
**Parameters:**
* `parentId` (TEXT, Required): ID or URL of the parent PAGE (or wiki) under which the database will be created
* `title` (TEXT, Optional): Optional database title
* `properties` (OBJECT, Required): Property schema object. Example: `{"Name": {"title": {}}, "Status": {"status": {}}, "Price": {"number": {"format": "dollar"}}}`
* `icon` (TEXT, Optional): Emoji or full icon object
* `cover` (TEXT, Optional): External cover URL or full external file object
* `isInline` (BOOLEAN, Optional): Create the database inline on the page
**Output:** Returns the created database with ID, title, and properties schema
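A sketch of a `properties` schema for a small product database, mirroring the parameter example above (property names are illustrative):

```python
# Each key is a property name; each value declares the property type.
properties = {
    "Name": {"title": {}},                      # every database needs one title property
    "Status": {"status": {}},                   # status property with default options
    "Price": {"number": {"format": "dollar"}},  # number property formatted as dollars
}
```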
***
### Update Database
##### `notion.updateDatabase`
Updates database metadata (title, description, icon, cover) and/or modifies database properties (add, remove, rename, or change schema)
**Requires Confirmation:** Yes
**Parameters:**
* `databaseId` (TEXT, Required): ID or URL of the database to update
* `title` (TEXT, Optional): Optional new database title
* `description` (TEXT, Optional): Optional new database description
* `properties` (OBJECT, Optional): JSON object describing property changes. Use null to remove a property, provide `{ name: 'New name' }` to rename, or pass a property schema object to change type/options
* `icon` (TEXT, Optional): Emoji or full icon object
* `cover` (TEXT, Optional): External cover URL or full external file object
**Output:** Returns the updated database with new metadata and properties
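The three kinds of property change (rename, remove, reschema) can be combined in one `properties` object. A sketch with illustrative property names:

```python
import json

# Rename "Cost" to "Price", remove "Legacy field", and convert
# "Priority" to a select property. Names are illustrative.
property_changes = {
    "Cost": {"name": "Price"},   # rename
    "Legacy field": None,        # null removes the property
    "Priority": {                # new schema replaces the old type
        "select": {"options": [{"name": "High"}, {"name": "Low"}]}
    },
}
print(json.dumps(property_changes))
```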
***
### Get Page Details
##### `notion.getPageDetails`
Retrieves detailed information about a specific Notion page including its properties, metadata, and structure
**Requires Confirmation:** No
**Parameters:**
* `pageId` (TEXT, Required): The unique identifier of the Notion page you want to retrieve information about
**Output:** Returns page details including ID, title, properties, and metadata
***
### Find Pages
##### `notion.findPages`
Searches for pages in your Notion workspace by title
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Optional): Search term used to find pages by their titles
**Output:** Returns array of pages matching the search criteria
***
### Find Databases
##### `notion.findDatabases`
Searches for databases in your Notion workspace by title
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Optional): Search term used to find databases by their titles
**Output:** Returns array of databases matching the search criteria
***
### Get Database Details
##### `notion.getDatabaseDetails`
Retrieves detailed information about a specific Notion database including its properties, metadata, and structure
**Requires Confirmation:** No
**Parameters:**
* `databaseId` (TEXT, Required): The unique identifier of the Notion database you want to retrieve information about
**Output:** Returns database details including ID, title, properties schema, and metadata
***
### Search
##### `notion.search`
Searches across your entire Notion workspace or within a specific database for pages and content
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Optional): Search term to find pages or databases. Searches are case-insensitive and match partial words
* `databaseId` (TEXT, Optional): Search within a specific database instead of the entire workspace
* `objectType` (SELECT, Optional): Filter results by type. Choose 'page' for pages only or 'database' for databases only
* `propertyFilters` (TEXT, Optional): Filter database pages by property values (requires database ID). Provide as JSON object
* `pageSize` (NUMBER, Optional): Number of results to return per page. Default is 30, maximum is 100
* `sortBy` (SELECT, Optional): Sort results by creation time or last edited time
* `sortDirection` (SELECT, Optional): Sort order for results
* `createdBy` (TEXT, Optional): Filter results by the user who created the page or database
* `lastEditedBy` (TEXT, Optional): Filter results by the user who last edited the page or database
* `startCursor` (TEXT, Optional): Pagination cursor from previous search results
**Output:** Returns array of pages and databases matching the search criteria
***
### Create Page
##### `notion.createPage`
Creates a new page in Notion, either as a database entry or as a child of another page
**Requires Confirmation:** Yes
**Parameters:**
* `parentId` (TEXT, Required): The ID of the parent database or page where the new page will be created
* `parentType` (SELECT, Optional): Type of parent where the page will be created
* `title` (TEXT, Optional): The title of the new page
* `properties` (OBJECT, Optional): Properties for the new page as a JSON object
* `content` (TEXT, Optional): The content of the page. Can be plain text (will be converted to paragraphs) or an array of Notion blocks
* `icon` (TEXT, Optional): An emoji or URL for the page icon
* `cover` (TEXT, Optional): URL of an image to use as the page cover
* `createInPersonalRoot` (BOOLEAN, Optional): When enabled and parentId is 'workspace' or 'root', creates the page at your personal workspace root
**Output:** Returns the created page with ID, title, and properties
***
### Update Page
##### `notion.updatePage`
Updates a page's properties and/or a specific block on that page. Use page fields for database/page metadata; use block fields to edit the content of an individual block
**Requires Confirmation:** Yes
**Parameters:**
* `pageId` (TEXT, Optional): ID of the page to update (properties, icon, cover, trash)
* `properties` (TEXT, Optional): JSON object of properties to update
* `icon` (TEXT, Optional): Emoji character or full Notion icon object
* `cover` (TEXT, Optional): URL string or full Notion external file object
* `inTrash` (BOOLEAN, Optional): Set true to move the page to trash, false to restore
* `blockId` (TEXT, Optional): ID of the block to update (content editing)
* `blockType` (TEXT, Optional): Block type to update (e.g., 'paragraph', 'heading\_1', 'heading\_2', 'heading\_3', 'to\_do', 'bulleted\_list\_item', 'numbered\_list\_item')
* `blockText` (TEXT, Optional): Text content for the block (converted to rich\_text)
* `blockChecked` (BOOLEAN, Optional): Only for to\_do blocks. true/false
* `blockPayload` (TEXT, Optional): Advanced: full JSON body for the block update (overrides blockType/blockText)
* `blockArchived` (BOOLEAN, Optional): Set true to archive the block, false to unarchive
**Output:** Returns the updated page or block with new values
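For block editing there are two paths: the simple `blockType`/`blockText` fields, or a full `blockPayload`. A sketch of both for a to-do block, assuming an illustrative block ID and following the Notion block body shape:

```python
import json

# Simple path: the action builds the block body from these fields.
simple_update = {
    "blockId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",  # illustrative ID
    "blockType": "to_do",
    "blockText": "Review the draft",
    "blockChecked": False,
}

# Advanced path: blockPayload carries the full block body and
# overrides blockType/blockText.
block_payload = {
    "to_do": {
        "rich_text": [{"type": "text", "text": {"content": "Review the draft"}}],
        "checked": False,
    }
}
print(json.dumps(block_payload))
```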
#### Triggers
***
### Updated Page
##### `notion.updatedPage`
Triggers when pages are updated
**Requires Confirmation:** No
**Parameters:**
* `pageId` (TEXT, Optional): ID of the page to monitor for updates
**Output:** Returns the result of the operation
***
### Updated Database Item
##### `notion.updatedDatabaseItem`
Triggers when items in the database are updated
**Requires Confirmation:** No
**Parameters:**
* `databaseId` (TEXT, Required): ID of the database to monitor for updated items
**Output:** Returns the result of the operation
***
### New Database Item
##### `notion.newDatabaseItem`
Triggers when new database items are added
**Requires Confirmation:** No
**Parameters:**
* `databaseId` (TEXT, Required): ID of the database to monitor for new items
**Output:** Returns the result of the operation
***
## Common Use Cases
* Manage and organize your Notion data
* Automate workflows with Notion
* Generate insights and reports
* Connect Notion with other tools
## Best Practices
**Getting Started:**
1. Enable the Notion integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Notion integration, contact [support@langdock.com](mailto:support@langdock.com)
# OneDrive
Source: https://docs.langdock.com/administration/integrations/onedrive
Microsoft's cloud storage service for storing and sharing files and folders
## Overview
OneDrive is Microsoft's cloud storage service for storing and sharing files and folders. Through Langdock's integration, you can access and manage OneDrive directly from your conversations.
**Authentication:** OAuth\
**Category:** Microsoft 365\
**Availability:** All workspace plans
## Available Actions
### Search Files
##### `onedrive.searchFiles`
Searches for files in OneDrive by their title
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Optional): The search term to find matching files
**Output:** Returns array of files with their details including URL, documentId, title, mimeType, author, and createdDate
***
### Download File
##### `onedrive.downloadFile`
Downloads a file from OneDrive
**Requires Confirmation:** No
**Parameters:**
* `parent` (TEXT, Required): The parent folder containing the file
* `itemId` (TEXT, Required): The unique identifier of the file to download
**Output:** Returns the file content for download
***
### Search Files
##### `onedrive.searchFiles`
Searches files by name and returns detailed information about each matching file
**Requires Confirmation:** No
**Parameters:**
* `name` (TEXT, Required): The name of the OneDrive item you want to find
**Output:** Returns array of files with detailed information
***
### List Available Drives
##### `onedrive.listAvailableDrives`
Lists all accessible OneDrive locations
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns array of available drives with their details
***
### Download OneDrive File
##### `onedrive.downloadOneDriveFile`
Downloads a file from OneDrive and returns base64 content
**Requires Confirmation:** No
**Parameters:**
* `parent` (OBJECT, Required): Parent object of the OneDrive file (e.g., driveId, userId, groupId, siteId)
* `itemId` (TEXT, Required): The OneDrive item identifier
**Output:** Returns the file content as base64
***
## Common Use Cases
* Manage and organize your OneDrive data
* Automate workflows with OneDrive
* Generate insights and reports
* Connect OneDrive with other tools
## Best Practices
**Getting Started:**
1. **Prerequisite:** A Microsoft Admin must [approve the Langdock application](/administration/microsoft-admin-approval) in your Microsoft workspace once.
2. Enable the OneDrive integration in your workspace settings
3. Authenticate using OAuth
4. Test the connection with a simple read operation
5. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the OneDrive integration, contact [support@langdock.com](mailto:support@langdock.com)
# OpenRegister
Source: https://docs.langdock.com/administration/integrations/openregister
Access German commercial register data, shareholders, balance sheets & more
## Overview
OpenRegister provides access to German commercial register data, including shareholders, balance sheets, and more. Through Langdock's integration, you can access and manage OpenRegister directly from your conversations.
**Authentication:** API Key\
**Category:** Business & Finance\
**Availability:** All workspace plans
## Available Actions
### Get Company Contact Information
##### `openregister.getCompanyContactInformation`
Retrieve contact information for a company using its unique ID. The response includes details such as email address, phone number, and VAT identification number
**Requires Confirmation:** No
**Parameters:**
* `companyId` (TEXT, Required): Unique company identifier. Can be retrieved by using the 'Search Companies' action
**Output:** Returns company contact information including email, phone, and VAT details
***
### Get Company by Website URL
##### `openregister.getCompanybyWebsiteURL`
Find a company using its website URL. The response includes a company ID that you can use with other endpoints to get details like financials, shareholders, and representatives
**Requires Confirmation:** No
**Parameters:**
* `url` (TEXT, Required): The URL of the company website
**Output:** Returns company information including company ID for further queries
***
### Get Shareholders
##### `openregister.getShareholders`
Retrieves shareholder information for a company. This endpoint currently only supports companies with the legal form GmbH
**Requires Confirmation:** No
**Parameters:**
* `companyId` (TEXT, Required): Unique company identifier. Example format: DE-HRB-F1103-267645
**Output:** Returns shareholder information for GmbH companies
***
### Search Companies
##### `openregister.searchCompanies`
Search for companies based on various criteria. You can filter by company name, register number, register type, register court, active status, and legal form. The response provides a list of matching companies
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Required): Text search query to find companies by name. Example: 'Descartes Technologies UG'
**Output:** Returns a list of matching companies with their basic information
***
### Get Company Information
##### `openregister.getCompanyInformation`
Retrieve detailed information about a company using its unique ID. The response includes company registration details, current status, name, address, business purpose, capital, and legal representatives
**Requires Confirmation:** No
**Parameters:**
* `companyId` (TEXT, Required): Unique company identifier. Can be retrieved by using the 'Search Companies' action
**Output:** Returns detailed company information including registration details, status, and representatives
***
## Common Use Cases
* Manage and organize your OpenRegister data
* Automate workflows with OpenRegister
* Generate insights and reports
* Connect OpenRegister with other tools
## Best Practices
**Getting Started:**
1. Enable the OpenRegister integration in your workspace settings
2. Authenticate using API Key
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your API Key credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the OpenRegister integration, contact [support@langdock.com](mailto:support@langdock.com)
# Outlook Calendar
Source: https://docs.langdock.com/administration/integrations/outlook-calendar
Microsoft's calendar application for scheduling and managing events
## Overview
Outlook Calendar is Microsoft's calendar application for scheduling and managing events. Through Langdock's integration, you can access and manage Outlook Calendar directly from your conversations.
**Authentication:** OAuth\
**Category:** Microsoft 365\
**Availability:** All workspace plans
## Available Actions
### Get Today's Events
##### `outlookcalendar.getTodaysEvents`
Retrieve all calendar events for today from your calendar or a shared calendar. Always include the user's timezone
**Requires Confirmation:** No
**Parameters:**
* `timezone` (TEXT, Optional): IANA timezone identifier for proper time display (e.g. 'Europe/Berlin')
* `calendarowner` (TEXT, Optional): Email address to view someone else's calendar events. Leave empty for your own events
**Output:** Returns array of today's events with their details
***
### Get Events for Specific Date
##### `outlookcalendar.getEventsforSpecificDate`
Retrieve all calendar events for a specific date only. Always include the user's timezone
**Requires Confirmation:** No
**Parameters:**
* `date` (TEXT, Required): The specific date to retrieve events for in ISO format
* `calendarowner` (TEXT, Optional): Email address to view someone else's calendar events. Leave empty for your own events
* `timezone` (TEXT, Optional): IANA timezone identifier for proper time display
**Output:** Returns array of events for the specified date
***
### Create Events
##### `outlookcalendar.createEvents`
Creates outlook calendar events
**Requires Confirmation:** Yes
**Parameters:**
* `timezone` (TEXT, Required): The timezone of the user's Outlook calendar
* `calendarId` (TEXT, Optional): The unique ID of the calendar to schedule the event in. Leave empty to use default calendar
* `locationName` (TEXT, Optional): Event location
* `endingTime` (TEXT, Required): When the event ends in ISO 8601 format
* `eventDescription` (TEXT, Optional): Description of what is planned for the event
* `startingTime` (TEXT, Required): When the event starts in ISO 8601 format
* `eventSubject` (TEXT, Required): The subject or title of the event
* `attendees` (TEXT, Optional): Comma- or semicolon-separated list of REQUIRED attendee emails
* `optionalAttendees` (TEXT, Optional): Comma- or semicolon-separated list of OPTIONAL attendee emails
* `isonlinemeeting` (BOOLEAN, Optional): Set to true to create an online meeting with video conferencing details
* `onlinemeetingprovider` (TEXT, Optional): Online meeting provider: 'teamsForBusiness', 'skypeForBusiness', or 'skype'
* `recurrence` (TEXT, Optional): Recurrence pattern as JSON
* `showAs` (SELECT, Optional): Event availability status: 'free', 'busy', 'tentative', 'outOfOffice'
**Output:** Returns the created event with ID and details
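The `recurrence` parameter takes a pattern as JSON, following the Microsoft Graph recurrence shape (a `pattern` plus a `range`). A sketch of a weekly Monday meeting with illustrative dates:

```python
import json

# Weekly on Mondays from early January through end of June.
# Dates are illustrative -- adjust to your event.
recurrence = {
    "pattern": {"type": "weekly", "interval": 1, "daysOfWeek": ["monday"]},
    "range": {
        "type": "endDate",
        "startDate": "2025-01-06",
        "endDate": "2025-06-30",
    },
}

# Pass as a JSON-formatted string.
print(json.dumps(recurrence))
```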
***
### Update Event
##### `outlookcalendar.updateEvent`
Update an existing calendar event
**Requires Confirmation:** Yes
**Parameters:**
* `attendees` (TEXT, Optional): Comma- or semicolon-separated REQUIRED attendee emails to add or update to required
* `optionalAttendees` (TEXT, Optional): Comma- or semicolon-separated OPTIONAL attendee emails to add or update to optional
* `removeAttendees` (TEXT, Optional): Comma- or semicolon-separated attendee emails to remove from the event
* `replaceAttendees` (BOOLEAN, Optional): If true, replace the entire attendee list with the attendees and optionalAttendees provided
* `eventSubject` (TEXT, Optional): The subject or title of the event
* `timezone` (TEXT, Optional): The timezone for the event
* `eventId` (TEXT, Required): The unique ID of the event to update
* `eventDescription` (TEXT, Optional): Description of what is planned for the event
* `endingTime` (TEXT, Optional): When the event ends in ISO 8601 format
* `locationName` (TEXT, Optional): Event location
* `startingTime` (TEXT, Optional): When the event starts in ISO 8601 format
* `isonlinemeeting` (BOOLEAN, Optional): Set to true to make this an online meeting with video conferencing details
* `onlinemeetingprovider` (TEXT, Optional): Online meeting provider: 'teamsForBusiness', 'skypeForBusiness', or 'skype'
* `recurrence` (TEXT, Optional): Recurrence pattern as JSON
* `showAs` (SELECT, Optional): Event availability status: 'free', 'busy', 'tentative', 'outOfOffice'
**Output:** Returns the updated event with new values
***
### List Calendars
##### `outlookcalendar.listCalendars`
Retrieve accessible calendars including your own and shared calendars
**Requires Confirmation:** No
**Parameters:**
* `calendarowner` (TEXT, Optional): Email address to view someone else's calendars. Leave empty for your own calendars
* `sortby` (TEXT, Optional): Sort calendars by field
* `includefields` (TEXT, Optional): Comma-separated list of fields to include
* `top` (TEXT, Optional): Maximum number of calendars to return
**Output:** Returns array of calendars with their details
***
### List Calendar Events
##### `outlookcalendar.listCalendarEvents`
List calendar events with filtering by subject, date, organizer, and status. Always include the user's timezone
**Requires Confirmation:** No
**Parameters:**
* `timezone` (TEXT, Optional): IANA timezone identifier for proper time display
* `datefrom` (TEXT, Optional): Start date for events in ISO format
* `dateto` (TEXT, Optional): End date for events in ISO format
* `subjectcontains` (TEXT, Optional): Filter events where subject contains this text
* `organizeremail` (TEXT, Optional): Filter events by organizer email address
* `calendarowner` (TEXT, Optional): Email address to view someone else's calendar events
* `calendarid` (TEXT, Optional): Specific calendar ID to filter events from
* `showas` (TEXT, Optional): Filter by availability status: 'free', 'busy', 'tentative', 'outOfOffice'
* `includecancelled` (TEXT, Optional): Include cancelled events in results
* `sortby` (TEXT, Optional): Sort events by field
* `maxEvents` (TEXT, Optional): Maximum number of events to return
**Output:** Returns array of events matching the filter criteria
***
### Get Event
##### `outlookcalendar.getEvent`
Retrieve a specific event by its ID
**Requires Confirmation:** No
**Parameters:**
* `eventId` (TEXT, Required): The unique ID of the event
**Output:** Returns the event details
***
### Delete Event
##### `outlookcalendar.deleteEvent`
Delete calendar events by ID or by filtering criteria
**Requires Confirmation:** Yes
**Parameters:**
* `eventId` (TEXT, Optional): The unique ID of a specific event to delete
* `attendeeEmail` (TEXT, Optional): Delete all events involving this person as attendee or organizer
* `subjectContains` (TEXT, Optional): Delete all events where subject contains this text
* `dateFrom` (TEXT, Optional): Start date for deletion range in ISO format
* `dateTo` (TEXT, Optional): End date for deletion range in ISO format
* `calendarId` (TEXT, Optional): The unique ID of the calendar to delete events from
* `maxEvents` (TEXT, Optional): Maximum number of events to delete (safety limit)
**Output:** Returns confirmation of deleted events
***
### Get Calendar Settings
##### `outlookcalendar.getCalendarSettings`
Retrieve calendar settings including timezone, working hours, and other configurations
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns calendar settings including timezone and working hours
***
### Find Meeting Times
##### `outlookcalendar.findMeetingTimes`
Find available meeting times for yourself and additional participants
**Requires Confirmation:** No
**Parameters:**
* `participantEmail` (TEXT, Optional): Email address of one additional participant
* `additionalParticipants` (TEXT, Optional): Comma-separated email addresses for multiple participants
* `startTime` (TEXT, Required): Earliest possible start time for the search in ISO 8601 format
* `endTime` (TEXT, Required): Latest possible end time in ISO 8601 format
* `duration` (TEXT, Required): Duration of the desired slot in ISO 8601 duration format (e.g., 'PT30M' for 30 minutes)
* `timezone` (TEXT, Required): The timezone of the user's Outlook calendar
* `minimumAttendeePercentage` (NUMBER, Optional): Minimum percentage of attendees that must be available (1-100)
* `maxSuggestions` (NUMBER, Optional): Maximum number of meeting time suggestions to return (1-20)
* `activityDomain` (TEXT, Optional): Time search scope: 'work' (business hours), 'personal' (includes weekends), 'unrestricted' (all hours)
**Output:** Returns array of available meeting times
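A sketch of a findMeetingTimes parameter set: a 30-minute slot during business hours with one other participant. The email address and dates are illustrative:

```python
# Find a 30-minute slot on a workday between 09:00 and 18:00 Berlin time.
params = {
    "participantEmail": "colleague@example.com",  # illustrative address
    "startTime": "2025-03-04T09:00:00",
    "endTime": "2025-03-04T18:00:00",
    "duration": "PT30M",        # ISO 8601 duration: 30 minutes
    "timezone": "Europe/Berlin",
    "activityDomain": "work",   # restrict suggestions to business hours
    "maxSuggestions": 5,
}
```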
#### Triggers
***
### New Event Matching Search
##### `outlookcalendar.newEventMatchingSearch`
Triggers when new calendar events matching the specified search query are created
**Requires Confirmation:** No
**Parameters:**
* `calendarNames` (TEXT, Optional): Comma-separated list of calendar names to monitor for new events
* `searchQuery` (TEXT, Required): Text to search for in event subjects
* `daysToInclude` (NUMBER, Optional): Number of days in the future to look for events. Default is 30 days
**Output:** Returns the result of the operation
***
### New Event
##### `outlookcalendar.newEvent`
Triggers when new calendar events are created in specified calendars
**Requires Confirmation:** No
**Parameters:**
* `calendarNames` (TEXT, Optional): Comma-separated list of calendar names to monitor
* `daysToInclude` (NUMBER, Optional): Number of days in the future to look for events
**Output:** Returns the result of the operation
***
## Common Use Cases
* Manage and organize your Outlook Calendar data
* Automate workflows with Outlook Calendar
* Generate insights and reports
* Connect Outlook Calendar with other tools
## Best Practices
**Getting Started:**
1. **Prerequisite:** A Microsoft Admin must [approve the Langdock application](/administration/microsoft-admin-approval) in your Microsoft workspace once.
2. Enable the Outlook Calendar integration in your workspace settings
3. Authenticate using OAuth
4. Test the connection with a simple read operation
5. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Outlook Calendar integration, contact [support@langdock.com](mailto:support@langdock.com)
# Outlook Email
Source: https://docs.langdock.com/administration/integrations/outlook-email
Microsoft's email service for personal and business communication
## Overview
Outlook Email is Microsoft's email service for personal and business communication. Through Langdock's integration, you can access and manage Outlook Email directly from your conversations.
**Authentication:** OAuth\
**Category:** Microsoft 365\
**Availability:** All workspace plans
## Available Actions
### Search Own Emails
##### `outlookemail.searchOwnEmails`
Search emails in your own mailbox including subfolders
**Requires Confirmation:** No
**Parameters:**
* `searchQuery` (TEXT, Optional): Full-text search using Microsoft Graph \$search parameter. Automatically ordered by relevance
* `folderId` (TEXT, Optional): ID of specific folder to search in your mailbox
* `top` (NUMBER, Optional): Maximum number of emails to return (default: 50, maximum: 1000)
* `includeFields` (TEXT, Optional): Comma-separated list of additional fields to include in response
* `includeAttachmentDetails` (BOOLEAN, Optional): Include detailed attachment information in the response
* `senderEmail` (TEXT, Optional): Filter by exact sender email address. Cannot be used with search queries
* `subjectContains` (TEXT, Optional): Filter emails where subject contains this text (case-insensitive). Cannot be used with search queries
* `dateFrom` (TEXT, Optional): Filter emails received on or after this date (ISO 8601 format). Cannot be used with search queries
* `dateTo` (TEXT, Optional): Filter emails received on or before this date (ISO 8601 format). Cannot be used with search queries
* `isRead` (SELECT, Optional): Filter by read status. Cannot be used with search queries
* `isFlagged` (SELECT, Optional): Filter by flagged status. Cannot be used with search queries
* `sortBy` (SELECT, Optional): Custom sort order for results. Default: 'Newest first'
**Output:** Returns array of emails with their details including subject, body, sender, recipients, and metadata
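Because `searchQuery` cannot be combined with the structured filters, a call uses one style or the other. A sketch of both parameter sets, with illustrative addresses and dates:

```python
# Style 1: full-text search, results ordered by relevance.
search_params = {"searchQuery": "quarterly report", "top": 25}

# Style 2: structured filters (combinable with each other, sortable,
# but never mixed with searchQuery).
filter_params = {
    "senderEmail": "alice@example.com",       # illustrative address
    "subjectContains": "invoice",
    "dateFrom": "2025-01-01T00:00:00Z",
    "sortBy": "Newest first",
}

# Guard against accidentally mixing the two styles.
assert "searchQuery" not in filter_params
```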
***
### Draft Reply to Message
##### `outlookemail.draftReplytoMessage`
Create a draft reply to an existing email. Use the toggle to choose Reply (sender only) or Reply all (sender and all recipients)
**Requires Confirmation:** Yes
**Parameters:**
* `messageId` (TEXT, Required): ID of the message to reply to
* `body` (MULTI\_LINE\_TEXT, Optional): Content of the reply. If HTML is enabled, interpreted as HTML; otherwise sent as a plain text comment
* `isHtml` (BOOLEAN, Optional): When enabled, sends the reply body as HTML using message.body; otherwise sends as a simple text comment
* `replyAll` (BOOLEAN, Optional): When enabled, sends the reply to the sender and all original recipients
* `outlookTimezone` (TEXT, Optional): Optional `Prefer` header value to set the timezone for the reply context
**Output:** Returns the created draft reply
***
### Send Email
##### `outlookemail.sendEmail`
Sends emails on the user's behalf via the Outlook API
**Requires Confirmation:** Yes
**Parameters:**
* `toRecipients` (TEXT, Optional): Comma-separated list of email addresses for primary recipients
* `ccRecipients` (TEXT, Optional): Comma-separated list of email addresses for CC recipients
* `bccRecipients` (TEXT, Optional): Comma-separated list of email addresses for BCC recipients
* `subject` (TEXT, Optional): Subject of the email
* `body` (MULTI\_LINE\_TEXT, Optional): The content of the email
* `isHtml` (BOOLEAN, Optional): Send email as HTML format. When enabled, the body content will be interpreted as HTML
**Output:** Returns the sent email with ID and details
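The recipient parameters are comma-separated strings, while the underlying Microsoft Graph message format expects an array of recipient objects. A minimal sketch of that conversion (the payload shape follows Graph's `sendMail` conventions; field values here are placeholders):

```python
import json

def to_graph_recipients(addresses: str) -> list[dict]:
    """Split a comma-separated address list into Graph-style recipient objects."""
    return [
        {"emailAddress": {"address": a.strip()}}
        for a in addresses.split(",")
        if a.strip()
    ]

payload = {
    "message": {
        "subject": "Status update",
        "body": {"contentType": "Text", "content": "All systems green."},
        "toRecipients": to_graph_recipients("a@example.com, b@example.com"),
    }
}
print(json.dumps(payload, indent=2))
```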
***
### Create Draft
##### `outlookemail.createDraft`
Creates email drafts on the user's behalf via the Outlook API with support for multiple recipients, CC, and BCC
**Requires Confirmation:** Yes
**Parameters:**
* `toRecipients` (TEXT, Optional): Comma-separated list of email addresses for primary recipients
* `ccRecipients` (TEXT, Optional): Comma-separated list of email addresses for CC recipients
* `bccRecipients` (TEXT, Optional): Comma-separated list of email addresses for BCC recipients
* `subject` (TEXT, Optional): Subject of the email
* `body` (MULTI\_LINE\_TEXT, Optional): The content of the email
* `isHtml` (BOOLEAN, Optional): Send email as HTML format. When enabled, the body content will be interpreted as HTML
**Output:** Returns the created draft with ID and details
***
### Search Shared Email Folders
##### `outlookemail.searchSharedEmailFolders`
Search emails in shared email folders from other users. Requires read access to mailbox and specific folders
**Requires Confirmation:** No
**Parameters:**
* `folderId` (TEXT, Required): ID of the shared folder to search
* `sharedFolderOwner` (TEXT, Required): Email address of the user who owns the shared folder
* `senderEmail` (TEXT, Optional): Filter by exact sender email address
* `subjectContains` (TEXT, Optional): Filter emails where subject contains this text (case-insensitive)
* `dateFrom` (TEXT, Optional): Filter emails received on or after this date (ISO 8601 format)
* `dateTo` (TEXT, Optional): Filter emails received on or before this date (ISO 8601 format)
* `isRead` (SELECT, Optional): Filter by read status
* `sortBy` (SELECT, Optional): Sort order for results. Default: 'Newest first'
* `top` (NUMBER, Optional): Maximum number of emails to return per page
* `includeFields` (TEXT, Optional): Comma-separated list of additional fields to include in response
* `includeAttachmentDetails` (BOOLEAN, Optional): Include detailed attachment information in the response
**Output:** Returns array of emails from shared folders
***
### List Contacts
##### `outlookemail.listContacts`
List contacts from your default Contacts or a specific contact folder
**Requires Confirmation:** No
**Parameters:**
* `contactFolderId` (TEXT, Optional): When provided, lists contacts from this contact folder
* `top` (NUMBER, Optional): Maximum number of contacts to return (default: 50, max: 999)
* `orderBy` (SELECT, Optional): Sort contacts by a specific field
**Output:** Returns array of contacts with their details
***
### Get Contact
##### `outlookemail.getContact`
Find and retrieve a specific contact by name, company, or contact ID
**Requires Confirmation:** No
**Parameters:**
* `searchName` (TEXT, Optional): Search for contacts by name (supports partial matches)
* `contactId` (TEXT, Optional): Exact contact ID for precise lookup
* `companyName` (TEXT, Optional): Filter by company name to help narrow down results
* `maxResults` (NUMBER, Optional): Maximum number of contacts to return when searching by name
**Output:** Returns array of contacts matching the search criteria
***
### Create Contact
##### `outlookemail.createContact`
Create a new contact in your Outlook address book or a specific contact folder
**Requires Confirmation:** Yes
**Parameters:**
* `contactFolderId` (TEXT, Optional): Create the contact inside a specific contact folder
* `givenName` (TEXT, Optional): First name of the contact
* `surname` (TEXT, Optional): Last name of the contact
* `displayName` (TEXT, Optional): Full display name for the contact
* `emailAddresses` (TEXT, Required): Comma-separated list of email addresses for the contact
* `businessPhones` (TEXT, Optional): Comma-separated list of business phone numbers
* `homePhones` (TEXT, Optional): Comma-separated list of home phone numbers
* `mobilePhone` (TEXT, Optional): Mobile phone number
* `companyName` (TEXT, Optional): Company or organization
* `jobTitle` (TEXT, Optional): Contact's job title
* `department` (TEXT, Optional): Department or team
* `businessAddressStreet` (TEXT, Optional): Street of the business address
* `businessAddressCity` (TEXT, Optional): City of the business address
* `businessAddressState` (TEXT, Optional): State/Province of the business address
* `businessAddressPostalCode` (TEXT, Optional): Postal/ZIP code of the business address
* `businessAddressCountryOrRegion` (TEXT, Optional): Country/Region of the business address
* `homeAddressStreet` (TEXT, Optional): Street of the home address
* `homeAddressCity` (TEXT, Optional): City of the home address
* `homeAddressState` (TEXT, Optional): State/Province of the home address
* `homeAddressPostalCode` (TEXT, Optional): Postal/ZIP code of the home address
* `homeAddressCountryOrRegion` (TEXT, Optional): Country/Region of the home address
* `categories` (TEXT, Optional): Comma-separated list of categories to assign to the contact
**Output:** Returns the created contact with ID and details
***
### List Contact Folders
##### `outlookemail.listContactFolders`
List all contact folders (default Contacts and subfolders)
**Requires Confirmation:** No
**Parameters:**
* `includeSubfolders` (BOOLEAN, Optional): Include all nested child contact folders (default: true)
* `sortByPath` (BOOLEAN, Optional): Sort folders alphabetically by their full path
**Output:** Returns array of contact folders with their details
***
### List Folders
##### `outlookemail.listFolders`
Lists all mail folders and subfolders in the user's mailbox, as well as shared folders from other users, with their IDs and hierarchy
**Requires Confirmation:** No
**Parameters:**
* `includeSubfolders` (BOOLEAN, Optional): Include all subfolders in the hierarchy (default: true)
* `includeHidden` (BOOLEAN, Optional): Include hidden folders like Clutter
* `sortByPath` (BOOLEAN, Optional): Sort folders alphabetically by their full path
* `filterType` (SELECT, Optional): Filter folders by type
* `sharedFolderUsers` (TEXT, Optional): Comma-separated list of email addresses of users who have granted you access to their mailbox
**Output:** Returns array of folders with their hierarchy and details
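When working with the returned hierarchy, it is often handy to flatten it into human-readable paths before picking a `folderId`. A sketch, assuming Graph-style `id`/`displayName`/`childFolders` fields in the response:

```python
def folder_paths(folders, parent=""):
    """Recursively flatten a folder tree into (id, 'Inbox/Sub') pairs."""
    out = []
    for f in folders:
        path = f"{parent}/{f['displayName']}" if parent else f["displayName"]
        out.append((f["id"], path))
        out.extend(folder_paths(f.get("childFolders", []), path))
    return out

tree = [{"id": "1", "displayName": "Inbox",
         "childFolders": [{"id": "2", "displayName": "Receipts"}]}]
# folder_paths(tree) → [("1", "Inbox"), ("2", "Inbox/Receipts")]
```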
***
## Triggers
### New Email
##### `outlookemail.newEmail`
Triggers when a new email is received
**Requires Confirmation:** No
**Parameters:**
* `includeAttachments` (BOOLEAN, Optional): When enabled, includes attachment details in the trigger output
**Output:** Returns the received email with its details
***
### New Email Matching Search
##### `outlookemail.newEmailMatchingSearch`
Triggers when new emails matching the specified search query are received
**Requires Confirmation:** No
**Parameters:**
* `includeAttachments` (BOOLEAN, Optional): When enabled, includes attachment details in the trigger output
* `searchQuery` (TEXT, Required): Microsoft Graph search query to filter emails
**Output:** Returns the matching email with its details
***
### New Email in Shared Inbox
##### `outlookemail.newEmailinSharedInbox`
Triggers when new emails are received in specified shared inboxes
**Requires Confirmation:** No
**Parameters:**
* `sharedFolderOwners` (TEXT, Required): Comma-separated list of user emails whose shared inboxes to monitor
* `folderId` (TEXT, Optional): Folder to monitor within each shared inbox (default: inbox)
* `includeAttachments` (BOOLEAN, Optional): When enabled, includes attachment details in the trigger output
**Output:** Returns the received email from the shared inbox with its details
***
## Common Use Cases
* Manage and organize your Outlook Email data
* Automate workflows with Outlook Email
* Generate insights and reports
* Connect Outlook Email with other tools
## Best Practices
**Getting Started:**
1. **Prerequisite:** A Microsoft Admin must [approve the Langdock application](/administration/microsoft-admin-approval) in your Microsoft workspace once.
2. Enable the Outlook Email integration in your workspace settings
3. Authenticate using OAuth
4. Test the connection with a simple read operation
5. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Outlook Email integration, contact [support@langdock.com](mailto:support@langdock.com)
# Personio
Source: https://docs.langdock.com/administration/integrations/personio
All-in-one HR software for managing employees, attendance, time accounts, and performance
## Overview
All-in-one HR software for managing employees, attendance, time accounts, and performance. Through Langdock's integration, you can access and manage Personio directly from your conversations.
**Authentication:** API Key\
**Category:** CRM & Customer Support\
**Availability:** All workspace plans
## Available Actions
### List persons
##### `personio.listpersons`
Get a list of persons with optional filters
**Requires Confirmation:** No
**Parameters:**
* `limit` (NUMBER, Optional): Number of persons to return per page (1-50, default: 10)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
* `id` (TEXT, Optional): Filter by specific person ID
* `email` (TEXT, Optional): Filter by email address
* `first_name` (TEXT, Optional): Filter by first name
* `last_name` (TEXT, Optional): Filter by last name
* `preferred_name` (TEXT, Optional): Filter by preferred name
* `created_at` (TEXT, Optional): Filter by creation date (YYYY-MM-DD)
* `created_at_gt` (TEXT, Optional): Filter persons created after this date (YYYY-MM-DD)
* `created_at_lt` (TEXT, Optional): Filter persons created before this date (YYYY-MM-DD)
* `updated_at` (TEXT, Optional): Filter by updated date (YYYY-MM-DD)
* `updated_at_gt` (TEXT, Optional): Filter persons updated after this date (YYYY-MM-DD)
* `updated_at_lt` (TEXT, Optional): Filter persons updated before this date (YYYY-MM-DD)
**Output:** Returns a list of persons with their details including ID, name, email, and employment information
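Results are cursor-paginated (at most 50 persons per page), so retrieving everyone means looping until no cursor is returned. A generic sketch; `fetch_page` is a hypothetical stand-in for the actual `personio.listpersons` call and is assumed to return `(items, next_cursor_or_None)`:

```python
def iter_persons(fetch_page, limit=50):
    """Drain a cursor-paginated list endpoint page by page."""
    cursor = None
    while True:
        items, cursor = fetch_page(limit=limit, cursor=cursor)
        yield from items
        if cursor is None:
            break

# Simulated two-page response keyed by cursor
pages = {None: (["alice", "bob"], "c1"), "c1": (["carol"], None)}
result = list(iter_persons(lambda limit, cursor: pages[cursor]))
# result == ["alice", "bob", "carol"]
```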
***
### Get person
##### `personio.getperson`
Retrieve a single person by ID
**Requires Confirmation:** No
**Parameters:**
* `id` (TEXT, Required): The unique identifier of the employee (e.g. "12345678")
**Output:** Returns the specific person details
***
### Create person
##### `personio.createperson`
Create a new person and employment
**Requires Confirmation:** Yes
**Parameters:**
* `first_name` (TEXT, Required): First name of the employee
* `last_name` (TEXT, Required): Last name of the employee
* `email` (TEXT, Optional): Email address of the employee. Must be unique across all employees
* `preferred_name` (TEXT, Optional): The preferred name of the employee, if relevant
* `gender` (TEXT, Optional): Gender of the employee (e.g. MALE, FEMALE, DIVERSE)
* `language_code` (TEXT, Optional): Main language of the employee (e.g. 'de' for German, 'en' for English)
* `custom_attributes` (MULTI\_LINE\_TEXT, Optional): Custom attributes as JSON array or object
* `employments` (MULTI\_LINE\_TEXT, Optional): Employment details as JSON array
**Output:** Returns the created person with their ID and details
***
### Delete person
##### `personio.deleteperson`
Delete a person
**Requires Confirmation:** Yes
**Parameters:**
* `id` (TEXT, Required): The unique identifier of the employee to delete (e.g. "12345678")
**Output:** Confirmation of deletion
***
### List employments
##### `personio.listemployments`
List employments of a given person
**Requires Confirmation:** No
**Parameters:**
* `person_id` (TEXT, Required): The unique identifier of the person (e.g. "12345678")
* `limit` (NUMBER, Optional): Number of employments to return per page (1-50, default: 10)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
* `id` (TEXT, Optional): Filter by specific employment ID
* `updated_at` (TEXT, Optional): Filter by updated date (YYYY-MM-DD)
* `updated_at_gt` (TEXT, Optional): Filter employments updated after this date (YYYY-MM-DD)
* `updated_at_lt` (TEXT, Optional): Filter employments updated before this date (YYYY-MM-DD)
**Output:** Returns a list of employments for the specified person
***
### Get employment
##### `personio.getemployment`
Retrieve a single employment by ID
**Requires Confirmation:** No
**Parameters:**
* `person_id` (TEXT, Required): The unique identifier of the person (e.g. "12345678")
* `id` (TEXT, Required): The unique identifier of the employment (e.g. "98765432")
**Output:** Returns the specific employment details
***
### Update employment
##### `personio.updateemployment`
Update an employment record
**Requires Confirmation:** Yes
**Parameters:**
* `person_id` (TEXT, Required): The unique identifier of the person (e.g. "12345678")
* `employment_id` (TEXT, Required): The unique identifier of the employment to update
* `supervisor` (MULTI\_LINE\_TEXT, Optional): Supervisor object as JSON
* `office` (MULTI\_LINE\_TEXT, Optional): Office object as JSON
* `org_units` (MULTI\_LINE\_TEXT, Optional): Organization units (department/team) as JSON array
* `legal_entity` (MULTI\_LINE\_TEXT, Optional): Legal entity object as JSON
* `position` (MULTI\_LINE\_TEXT, Optional): Position object as JSON
* `status` (TEXT, Optional): Employment status (e.g. ACTIVE, INACTIVE)
* `employment_start_date` (TEXT, Optional): When the employment contract starts (YYYY-MM-DD)
* `type` (TEXT, Optional): Type of employment (e.g. INTERNAL, EXTERNAL)
* `contract_end_date` (TEXT, Optional): When the employment contract ends, if temporary (YYYY-MM-DD)
* `probation_end_date` (TEXT, Optional): When the probation period ends (YYYY-MM-DD)
* `probation_period_length` (NUMBER, Optional): Length of probation period in months
* `weekly_working_hours` (NUMBER, Optional): Number of hours worked weekly
* `full_time_weekly_working_hours` (NUMBER, Optional): Hours per week considered full time for this employment
* `cost_centers` (MULTI\_LINE\_TEXT, Optional): Weight distribution between cost centers as JSON array with percentages
**Output:** Returns the updated employment details
***
### Update person
##### `personio.updateperson`
Update a person's information
**Requires Confirmation:** Yes
**Parameters:**
* `id` (TEXT, Required): The unique identifier of the employee to update (e.g. "12345678")
* `email` (TEXT, Optional): Email address of the employee. Must be unique across all employees
* `first_name` (TEXT, Optional): First name of the employee
* `last_name` (TEXT, Optional): Last name of the employee
* `preferred_name` (TEXT, Optional): The preferred name of the employee, if relevant
* `gender` (TEXT, Optional): Gender of the employee (e.g. MALE, FEMALE, DIVERSE)
* `language_code` (TEXT, Optional): Main language of the employee (e.g. 'de' for German, 'en' for English)
* `custom_attributes` (MULTI\_LINE\_TEXT, Optional): Custom attributes as JSON array or object
**Output:** Returns the updated person details
***
### Get attendance
##### `personio.getattendance`
Get attendance records with date range and employee filtering
**Requires Confirmation:** No
**Parameters:**
* `limit` (NUMBER, Optional): Maximum number of attendance periods to return (1-100, default: 100)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
* `id` (TEXT, Optional): Filter by attendance period IDs (comma-separated for multiple)
* `person_id` (TEXT, Optional): Filter by person IDs (comma-separated for multiple)
* `start_gte` (TEXT, Optional): Filter periods starting from this date-time (ISO-8601)
* `start_lte` (TEXT, Optional): Filter periods starting before or at this date-time (ISO-8601)
* `end_lte` (TEXT, Optional): Filter periods ending before or at this date-time (ISO-8601)
* `end_gte` (TEXT, Optional): Filter periods ending after or at this date-time (ISO-8601)
* `updated_at_gte` (TEXT, Optional): Filter periods updated after or at this date-time (ISO-8601)
* `updated_at_lte` (TEXT, Optional): Filter periods updated before or at this date-time (ISO-8601)
* `status` (TEXT, Optional): Filter by attendance period status
**Output:** Returns a list of attendance records with details
***
### Get attendance period
##### `personio.getattendanceperiod`
Retrieve a single attendance period by ID
**Requires Confirmation:** No
**Parameters:**
* `id` (TEXT, Required): The ID of the attendance period to retrieve
**Output:** Returns the specific attendance period details
***
### Create attendance
##### `personio.createattendance`
Create a new attendance entry for an employee
**Requires Confirmation:** Yes
**Parameters:**
* `person_id` (TEXT, Required): The person's unique identifier
* `type` (TEXT, Required): Attendance period type: WORK or BREAK
* `start` (MULTI\_LINE\_TEXT, Required): Start date/time as JSON: `{"date_time": "2024-01-01T09:00:00"}`
* `end` (MULTI\_LINE\_TEXT, Optional): End date/time as JSON: `{"date_time": "2024-01-01T17:00:00"}`
* `comment` (TEXT, Optional): Optional comment for the attendance period
* `project_id` (TEXT, Optional): Project ID (only for WORK periods, must be ACTIVE)
* `skip_approval` (BOOLEAN, Optional): Skip any approval that this request would otherwise require (default: false)
**Output:** Returns the created attendance record
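The `start` and `end` parameters take a small JSON object rather than a bare timestamp. A minimal sketch for producing that shape from a `datetime` (the `period_field` helper is illustrative, not part of the integration):

```python
import json
from datetime import datetime

def period_field(dt: datetime) -> str:
    """Format a datetime as the JSON object the start/end parameters expect."""
    return json.dumps({"date_time": dt.strftime("%Y-%m-%dT%H:%M:%S")})

start = period_field(datetime(2024, 1, 1, 9, 0))
end = period_field(datetime(2024, 1, 1, 17, 0))
# start == '{"date_time": "2024-01-01T09:00:00"}'
```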
***
### Update attendance period
##### `personio.updateattendanceperiod`
Update an attendance period by ID
**Requires Confirmation:** Yes
**Parameters:**
* `id` (TEXT, Required): The ID of the attendance period to update
* `type` (TEXT, Optional): Attendance period type: WORK or BREAK
* `start` (MULTI\_LINE\_TEXT, Optional): Start date/time as JSON: `{"date_time": "2024-01-01T09:00:00"}`
* `end` (MULTI\_LINE\_TEXT, Optional): End date/time as JSON: `{"date_time": "2024-01-01T17:00:00"}`
* `comment` (TEXT, Optional): Optional comment for the attendance period
* `project_id` (TEXT, Optional): Project ID (only for WORK periods, must be ACTIVE, or null to remove)
* `skip_approval` (BOOLEAN, Optional): Skip any approval that this request would otherwise require (default: false)
**Output:** Returns the updated attendance period details
***
### Delete attendance period
##### `personio.deleteattendanceperiod`
Delete an attendance period by ID
**Requires Confirmation:** Yes
**Parameters:**
* `id` (TEXT, Required): The ID of the attendance period to delete
**Output:** Confirmation of deletion
***
### Create absence period
##### `personio.createabsenceperiod`
Creates a new absence period
**Requires Confirmation:** Yes
**Parameters:**
* `person_id` (TEXT, Required): The person's unique identifier
* `absence_type_id` (TEXT, Required): The ID of the absence type (UUID format)
* `starts_from` (MULTI\_LINE\_TEXT, Required): Start of absence as JSON: `{"date_time": "2025-12-29T00:00:00", "type": "FIRST_HALF"}`
* `ends_at` (MULTI\_LINE\_TEXT, Optional): End of absence as JSON: `{"date_time": "2026-01-01T00:00:00", "type": "SECOND_HALF"}`
* `comment` (TEXT, Optional): Optional comment for the absence period
* `skip_approval` (BOOLEAN, Optional): Skip any approval that this request would otherwise require (default: false)
**Output:** Returns the created absence period details
***
### List compensation types
##### `personio.listcompensationtypes`
Returns a list of compensation types including one-time and recurring types
**Requires Confirmation:** No
**Parameters:**
* `limit` (NUMBER, Optional): Number of compensation types to return per page (1-100, default: 100)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
**Output:** Returns a list of compensation types
***
### Create compensation type
##### `personio.createcompensationtype`
Creates a new compensation type that can be used when creating compensations
**Requires Confirmation:** Yes
**Parameters:**
* `name` (TEXT, Required): Name of the compensation type
* `category` (TEXT, Required): Payout frequency: ONE\_TIME or RECURRING
**Output:** Returns the created compensation type details
***
### Create compensation
##### `personio.createcompensation`
Creates a compensation for an employee (one-time, recurring, fixed, or hourly). Bonuses are not supported
**Requires Confirmation:** Yes
**Parameters:**
* `person_id` (TEXT, Required): The person ID or person object as JSON `{"id": "12345678"}`
* `type_id` (TEXT, Required): The compensation type ID or type object as JSON `{"id": "uuid"}`
* `value` (NUMBER, Required): Amount in currency's numeric unit with up to 2 decimal places
* `effective_from` (TEXT, Required): The effective start date of the compensation (YYYY-MM-DD)
* `interval` (TEXT, Optional): Payout interval: MONTHLY, YEARLY (mandatory for RECURRING, ignored for ONE\_TIME)
* `comment` (TEXT, Optional): Optional comment about this compensation
**Output:** Returns the created compensation details
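Two of the parameter notes above are easy to trip over: `value` allows at most 2 decimal places, and `interval` is mandatory only for `RECURRING` types. A hedged pre-flight sketch (the `validate_compensation` helper is hypothetical, not part of the API):

```python
from decimal import Decimal

def validate_compensation(value: str, category: str, interval=None) -> None:
    """Pre-flight checks mirroring the parameter notes above."""
    if -Decimal(value).as_tuple().exponent > 2:
        raise ValueError("value may have at most 2 decimal places")
    if category == "RECURRING" and interval not in ("MONTHLY", "YEARLY"):
        raise ValueError("interval (MONTHLY or YEARLY) is mandatory for RECURRING")

validate_compensation("4500.00", "RECURRING", "MONTHLY")  # passes
validate_compensation("250", "ONE_TIME")                  # passes
```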
***
### List legal entities
##### `personio.listlegalentities`
Returns a list of legal entities for the company, sorted by creation date
**Requires Confirmation:** No
**Parameters:**
* `id` (TEXT, Optional): Filter by one or more legal entity IDs (comma-separated for multiple)
* `country` (TEXT, Optional): Filter by country codes (comma-separated for multiple, e.g. DE,US)
* `limit` (NUMBER, Optional): Number of legal entities to return per page (1-100, default: 20)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
**Output:** Returns a list of legal entities
***
### Get legal entity
##### `personio.getlegalentity`
Retrieves a single legal entity by ID
**Requires Confirmation:** No
**Parameters:**
* `id` (TEXT, Required): The ID of the legal entity to retrieve
**Output:** Returns the specific legal entity details
***
### Get org unit
##### `personio.getorgunit`
Retrieves an organizational unit (team or department) by ID. Get org unit IDs from list\_employments or get\_employment responses
**Requires Confirmation:** No
**Parameters:**
* `id` (TEXT, Required): The ID of the Org Unit to retrieve. Get this from list\_employments or employment records
* `type` (TEXT, Required): The type of the Org Unit (e.g. team or department)
* `include_parent_chain` (BOOLEAN, Optional): Include the parent org unit chain in the response (default: false)
**Output:** Returns the organizational unit details
***
### List absence periods
##### `personio.listabsenceperiods`
Returns a list of absence periods with pagination and filtering
**Requires Confirmation:** No
**Parameters:**
* `limit` (NUMBER, Optional): Maximum number of absence periods to return (1-100, default: 100)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
* `id` (TEXT, Optional): Filter by one or more absence period IDs
* `absence_type_id` (TEXT, Optional): Filter by one or more absence type IDs
* `person_id` (TEXT, Optional): Filter by one or more person IDs
* `starts_from_gte` (TEXT, Optional): Filter periods starting from this date-time (inclusive, ISO-8601)
* `starts_from_lte` (TEXT, Optional): Filter periods starting before or at this date-time (ISO-8601)
* `ends_at_lte` (TEXT, Optional): Filter periods ending before or at this date-time (ISO-8601)
* `ends_at_gte` (TEXT, Optional): Filter periods ending after or at this date-time (ISO-8601)
* `updated_at_gte` (TEXT, Optional): Filter periods updated after or at this date-time (ISO-8601)
* `updated_at_lte` (TEXT, Optional): Filter periods updated before or at this date-time (ISO-8601)
**Output:** Returns a list of absence periods
***
### Get absence period
##### `personio.getabsenceperiod`
Retrieves an absence period by ID. Get absence period IDs from list\_absence\_periods or create\_absence responses
**Requires Confirmation:** No
**Parameters:**
* `id` (TEXT, Required): The ID of the absence period to retrieve. Get this from list\_absence\_periods or create\_absence responses
**Output:** Returns the specific absence period details
***
### Update absence period
##### `personio.updateabsenceperiod`
Updates an absence period by ID
**Requires Confirmation:** Yes
**Parameters:**
* `id` (TEXT, Required): The ID of the absence period to update
* `starts_from` (MULTI\_LINE\_TEXT, Optional): Start of absence period as JSON object: `{"date_time": "2025-12-29T00:00:00", "type": "FIRST_HALF"}`
* `ends_at` (MULTI\_LINE\_TEXT, Optional): End of absence period as JSON object: `{"date_time": "2026-01-01T00:00:00", "type": "SECOND_HALF"}`
* `comment` (TEXT, Optional): Optional comment for the absence period
* `skip_approval` (BOOLEAN, Optional): Skip any approval that this update would otherwise require (default: false)
**Output:** Returns the updated absence period details
***
### Delete absence period
##### `personio.deleteabsenceperiod`
Deletes an absence period by ID
**Requires Confirmation:** Yes
**Parameters:**
* `id` (TEXT, Required): The ID of the absence period to delete
**Output:** Confirmation of deletion
***
### Get absence period breakdowns
##### `personio.getabsenceperiodbreakdowns`
Retrieves daily breakdowns for an absence period
**Requires Confirmation:** No
**Parameters:**
* `id` (TEXT, Required): The ID of the absence period
* `limit` (NUMBER, Optional): Number of breakdown days to return (1-28, default: 28)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
**Output:** Returns daily breakdowns for the absence period
***
### Get time off types
##### `personio.gettimeofftypes`
Get all available time off types
**Requires Confirmation:** No
**Parameters:**
* `limit` (NUMBER, Optional): Maximum number of absence types to return (1-100, default: 100)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
**Output:** Returns a list of time off types
***
### Get absence type
##### `personio.getabsencetype`
Retrieves an absence type by ID
**Requires Confirmation:** No
**Parameters:**
* `id` (TEXT, Required): The ID of the absence type (UUID format)
**Output:** Returns the specific absence type details
***
### List documents
##### `personio.listdocuments`
Lists the metadata of documents belonging to the provided owner ID
**Requires Confirmation:** No
**Parameters:**
* `owner_id` (TEXT, Required): The ID of the owner of the documents
* `category_id` (TEXT, Optional): The ID of the category in which the documents belong
* `created_at_gte` (TEXT, Optional): Filter documents created on or after this date (YYYY-MM-DD)
* `created_at_lt` (TEXT, Optional): Filter documents created before this date (YYYY-MM-DD)
* `limit` (NUMBER, Optional): Number of documents to return (1-200, default: 100)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
**Output:** Returns a list of documents
***
### Delete document
##### `personio.deletedocument`
Deletes a document with the provided document ID
**Requires Confirmation:** Yes
**Parameters:**
* `document_id` (TEXT, Required): The ID of the document to delete
**Output:** Confirmation of deletion
***
### List compensations
##### `personio.listcompensations`
Returns payroll compensations including salary, hourly, one-time, recurring, and bonuses
**Requires Confirmation:** No
**Parameters:**
* `start_date` (TEXT, Optional): Start date for compensations (YYYY-MM-DD). Duration with end\_date must be ≤ 1 month
* `end_date` (TEXT, Optional): End date for compensations (YYYY-MM-DD). Duration with start\_date must be ≤ 1 month
* `person_id` (TEXT, Optional): Filter by one or more person IDs (comma-separated for multiple)
* `legal_entity_id` (TEXT, Optional): Filter by one or more legal entity IDs (comma-separated for multiple)
* `limit` (NUMBER, Optional): Number of compensations to return per page (1-100, default: 100)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
**Output:** Returns a list of compensations
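Since the `start_date`/`end_date` window must span at most one month, checking the range before calling avoids an avoidable error. A rough sketch of that check (one interpretation of "≤ 1 month": the same day-of-month in the following month, or earlier):

```python
from datetime import date

def within_one_month(start_date: str, end_date: str) -> bool:
    """True if the start_date..end_date window spans at most one calendar month."""
    s, e = date.fromisoformat(start_date), date.fromisoformat(end_date)
    months = (e.year - s.year) * 12 + (e.month - s.month)
    return s <= e and (months < 1 or (months == 1 and e.day <= s.day))

within_one_month("2024-01-15", "2024-02-15")  # True
within_one_month("2024-01-15", "2024-02-16")  # False
```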
***
### Search person by email
##### `personio.searchpersonbyemail`
Find a person by their email address
**Requires Confirmation:** No
**Parameters:**
* `email` (TEXT, Required): The email address of the employee to search for
**Output:** Returns the person details if found
***
### Get time off balance
##### `personio.gettimeoffbalance`
Get the time off balance for a person
**Requires Confirmation:** No
**Parameters:**
* `employeeId` (TEXT, Required): The unique identifier of the employee
**Output:** Returns the time off balance details
***
### Get custom attributes
##### `personio.getcustomattributes`
Get the list of custom attributes defined in Personio
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns a list of custom attributes
***
### List applications
##### `personio.listapplications`
Get a list of recruiting applications with optional filters
**Requires Confirmation:** No
**Parameters:**
* `limit` (NUMBER, Optional): Number of applications to return (1-200, default: 100)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
* `updated_at_lt` (TEXT, Optional): Return applications updated before this date/time (ISO 8601 format). Cannot be used with 'Updated after'
* `updated_at_gt` (TEXT, Optional): Return applications updated after this date/time (ISO 8601 format). Cannot be used with 'Updated before'
* `candidate_email` (TEXT, Optional): Filter applications by candidate email address
**Output:** Returns a list of recruiting applications
***
### Get application
##### `personio.getapplication`
Retrieve a recruiting application by ID
**Requires Confirmation:** No
**Parameters:**
* `id` (TEXT, Required): The unique identifier of the application
**Output:** Returns the specific application details
***
### Get application stage transitions
##### `personio.getapplicationstagetransitions`
Get the history of stage transitions for a recruiting application
**Requires Confirmation:** No
**Parameters:**
* `id` (TEXT, Required): The unique identifier of the application
**Output:** Returns the stage transition history
***
### List candidates
##### `personio.listcandidates`
Get a list of recruiting candidates with optional filters
**Requires Confirmation:** Yes
**Parameters:**
* `limit` (NUMBER, Optional): Number of candidates to return (1-200, default: 100)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
* `updated_at_lt` (TEXT, Optional): Return candidates updated before this date/time (ISO 8601 format). Cannot be used with 'Updated after'
* `updated_at_gt` (TEXT, Optional): Return candidates updated after this date/time (ISO 8601 format). Cannot be used with 'Updated before'
* `email` (TEXT, Optional): Filter candidates by email address
**Output:** Returns a list of recruiting candidates
***
### Get candidate
##### `personio.getcandidate`
Retrieve a recruiting candidate by ID
**Requires Confirmation:** Yes
**Parameters:**
* `id` (TEXT, Required): The unique identifier of the candidate
**Output:** Returns the specific candidate details
***
### List jobs
##### `personio.listjobs`
Get a list of recruiting jobs with optional filters
**Requires Confirmation:** No
**Parameters:**
* `limit` (NUMBER, Optional): Number of jobs to return (1-200, default: 100)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
* `updated_at_lt` (TEXT, Optional): Return jobs updated before this date/time (ISO 8601 format). Cannot be used with 'Updated after'
* `updated_at_gt` (TEXT, Optional): Return jobs updated after this date/time (ISO 8601 format). Cannot be used with 'Updated before'
**Output:** Returns a list of recruiting jobs
***
### Get job
##### `personio.getjob`
Retrieve a recruiting job by ID
**Requires Confirmation:** No
**Parameters:**
* `id` (TEXT, Required): The unique identifier of the job
**Output:** Returns the specific job details
***
### List job categories
##### `personio.listjobcategories`
Get all recruiting job categories
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns a list of job categories
***
### Get job category
##### `personio.getjobcategory`
Retrieve a recruiting job category by ID
**Requires Confirmation:** No
**Parameters:**
* `id` (TEXT, Required): The unique identifier of the job category
**Output:** Returns the specific job category details
***
### List cost centers
##### `personio.listcostcenters`
Get a list of cost centers with filtering, sorting, and pagination
**Requires Confirmation:** Yes
**Parameters:**
* `id` (TEXT, Optional): Filter by one or more cost center IDs (comma-separated)
* `name` (TEXT, Optional): Filter by one or more cost center names (comma-separated)
* `sort` (TEXT, Optional): Sort results by field. Use field name for ascending (e.g., 'name') or minus sign for descending (e.g., '-name'). Options: id, -id, name, -name
* `limit` (NUMBER, Optional): Number of cost centers to return (1-100, default: 50)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
**Output:** Returns a list of cost centers
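The `sort` convention (field name for ascending, leading minus for descending) can be decoded with a tiny helper — shown purely as an illustration of the convention, not part of the integration:

```python
def parse_sort(sort: str) -> tuple[str, bool]:
    """Return (field, ascending) for a sort expression like 'name' or '-name'."""
    if sort.startswith("-"):
        return sort[1:], False
    return sort, True
```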
***
### List workplaces
##### `personio.listworkplaces`
Get a list of workplaces with filtering, sorting, and pagination
**Requires Confirmation:** No
**Parameters:**
* `id` (TEXT, Optional): Filter by one or more workplace IDs (comma-separated)
* `name` (TEXT, Optional): Filter by one or more workplace names (comma-separated)
* `sort` (TEXT, Optional): Sort results by field. Use field name for ascending (e.g., 'name') or minus sign for descending (e.g., '-name'). Options: id, -id, name, -name
* `limit` (NUMBER, Optional): Number of workplaces to return (1-100, default: 50)
* `cursor` (TEXT, Optional): Pagination cursor for next page of results
**Output:** Returns a list of workplaces
***
## Common Use Cases
* Manage and organize your Personio data
* Automate workflows with Personio
* Generate insights and reports
* Connect Personio with other tools
## Best Practices
**Getting Started:**
1. Enable the Personio integration in your workspace settings
2. Authenticate using API Key
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your API Key credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Personio integration, contact [support@langdock.com](mailto:support@langdock.com)
# Pinecone
Source: https://docs.langdock.com/administration/integrations/pinecone
The vector database for machine learning applications
## Overview
The vector database for machine learning applications. Through Langdock's integration, you can access and manage Pinecone directly from your conversations.
**Authentication:** API Key\
**Category:** AI & Search\
**Availability:** All workspace plans
## Available Actions
### Search Namespace
##### `pinecone.searchNamespace`
Searches the database for the most relevant information based on the query provided
**Requires Confirmation:** No
**Parameters:**
* `query` (VECTOR, Required): Vector query for similarity search
**Output:** Returns search results with matching vectors, metadata, and similarity scores
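Under the hood, a vector search ranks stored vectors by closeness to the query vector. A toy illustration of cosine-similarity ranking in pure Python — the real action performs this server-side inside Pinecone, so this is only to build intuition for what the similarity scores mean:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, index, k=2):
    """index: dict of id -> vector. Return the k ids most similar to query."""
    ranked = sorted(index, key=lambda i: cosine(query, index[i]), reverse=True)
    return ranked[:k]

# Hypothetical two-dimensional index for illustration.
index = {"doc1": [1.0, 0.0], "doc2": [0.9, 0.1], "doc3": [0.0, 1.0]}
nearest = top_k([1.0, 0.0], index)  # ["doc1", "doc2"]
```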
***
## Common Use Cases
* Manage and organize your Pinecone data
* Automate workflows with Pinecone
* Generate insights and reports
* Connect Pinecone with other tools
## Best Practices
**Getting Started:**
1. Enable the Pinecone integration in your workspace settings
2. Authenticate using API Key
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your API Key credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Pinecone integration, contact [support@langdock.com](mailto:support@langdock.com)
# Microsoft Planner
Source: https://docs.langdock.com/administration/integrations/planner
Microsoft's task management and collaboration platform for teams
## Overview
Microsoft's task management and collaboration platform for teams. Through Langdock's integration, you can access and manage Microsoft Planner directly from your conversations.
**Authentication:** OAuth\
**Category:** Microsoft 365\
**Availability:** All workspace plans
## Available Actions
### List Plans
##### `microsoftplanner.listPlans`
Retrieve all Planner plans that the user has access to
**Requires Confirmation:** No
**Parameters:**
* `groupId` (TEXT, Optional): ID of the Microsoft 365 group to get plans from. Leave empty to get plans from all accessible groups
* `includeDetails` (BOOLEAN, Optional): Include additional plan details such as categories and shared information
**Output:** Returns a list of plans with their details
***
### Create Plan
##### `microsoftplanner.createPlan`
Create a new Planner plan in a Microsoft 365 group
**Requires Confirmation:** No
**Parameters:**
* `groupId` (TEXT, Required): ID of the Microsoft 365 group where the plan will be created. The user must be a member of this group
* `title` (TEXT, Required): Title of the new plan
**Output:** Returns the created plan details
***
### Get Plan
##### `microsoftplanner.getPlan`
Retrieve details of a specific Planner plan
**Requires Confirmation:** No
**Parameters:**
* `planId` (TEXT, Required): ID of the plan to retrieve
* `includeDetails` (BOOLEAN, Optional): Include additional plan details such as categories and shared information
**Output:** Returns the plan details
***
### List Tasks
##### `microsoftplanner.listTasks`
Retrieve tasks from a Planner plan
**Requires Confirmation:** No
**Parameters:**
* `planId` (TEXT, Required): ID of the plan to get tasks from
* `includeDetails` (BOOLEAN, Optional): Include additional task details such as description, checklist, and references
**Output:** Returns a list of tasks from the plan
***
### Create Task
##### `microsoftplanner.createTask`
Create a new task in a Planner plan
**Requires Confirmation:** No
**Parameters:**
* `planId` (TEXT, Required): ID of the plan where the task will be created
* `planTitle` (TEXT, Optional): If no Plan ID is provided, resolves the plan by exact title across your accessible groups. If multiple plans match, an error listing the candidates is returned
* `title` (TEXT, Required): Title of the new task
* `bucketId` (TEXT, Optional): ID of the bucket where the task will be placed. Leave empty to place in the default bucket
* `bucketName` (TEXT, Optional): If no Bucket ID is provided, resolves the bucket by exact name. If no match is found, the task is created in the plan's default bucket
* `assignedToUserId` (TEXT, Optional): User ID to assign the task to. Leave empty to create an unassigned task
* `assignedToEmail` (TEXT, Optional): Email address (UPN) of the user to assign the task to. Leave empty to create an unassigned task
* `assignToMe` (BOOLEAN, Optional): If true and no other assignee is provided, assign the task to the current user
* `dueDateTime` (TEXT, Optional): ISO 8601 with timezone required (UTC 'Z' or offset), e.g., 2025-08-13T17:00:00Z or 2025-08-13T17:00:00+02:00
* `priority` (SELECT, Optional): Priority level of the task (Urgent, Important, Medium, Low)
**Output:** Returns the created task details
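The `dueDateTime` parameter requires an explicit timezone (UTC `Z` or an offset); a naive timestamp without one does not match the stated format. Both accepted styles can be produced from Python's standard library:

```python
from datetime import datetime, timezone, timedelta

# UTC 'Z' style: isoformat() emits '+00:00', so swap it for the 'Z' suffix.
utc = datetime(2025, 8, 13, 17, 0, tzinfo=timezone.utc)
due_utc = utc.isoformat().replace("+00:00", "Z")  # '2025-08-13T17:00:00Z'

# Explicit-offset style (here UTC+2).
cet = datetime(2025, 8, 13, 17, 0, tzinfo=timezone(timedelta(hours=2)))
due_cet = cet.isoformat()  # '2025-08-13T17:00:00+02:00'
```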
***
### Update Task
##### `microsoftplanner.updateTask`
Update an existing task in a Planner plan
**Requires Confirmation:** No
**Parameters:**
* `taskId` (TEXT, Required): ID of the task to update
* `title` (TEXT, Optional): New title for the task. Leave empty to keep current title
* `percentComplete` (NUMBER, Optional): Completion percentage (0-100). Set to 100 to mark as completed
* `dueDateTime` (TEXT, Optional): ISO 8601 with timezone required (UTC 'Z' or offset), e.g., 2025-08-13T17:00:00Z or 2025-08-13T17:00:00+02:00. Leave empty to keep current due date
* `priority` (SELECT, Optional): New priority level of the task. Leave empty to keep current priority
* `assignToMe` (BOOLEAN, Optional): If true and no other assignee is provided, assign the task to the current user
* `assignedToEmail` (TEXT, Optional): Email address (UPN) of the user to assign the task to. Leave empty to keep current assignments
* `assignedToUserId` (TEXT, Optional): User ID to assign the task to. Leave empty to keep current assignments
**Output:** Returns the updated task details
***
### List Buckets
##### `microsoftplanner.listBuckets`
Retrieve all buckets from a Planner plan
**Requires Confirmation:** No
**Parameters:**
* `planId` (TEXT, Required): ID of the plan to get buckets from
**Output:** Returns a list of buckets from the plan
***
### Create Bucket
##### `microsoftplanner.createBucket`
Create a new bucket in a Planner plan for organizing tasks
**Requires Confirmation:** No
**Parameters:**
* `planId` (TEXT, Required): ID of the plan where the bucket will be created
* `name` (TEXT, Required): Name of the new bucket
**Output:** Returns the created bucket details
***
### Get Task Comments
##### `microsoftplanner.getTaskComments`
Retrieve all comments from a Planner task's conversation thread
**Requires Confirmation:** No
**Parameters:**
* `taskId` (TEXT, Required): ID of the task to get comments from
* `includeEmpty` (BOOLEAN, Optional): Include comments that have no text content (system messages, etc.)
**Output:** Returns task comments
***
#### Triggers
### New Task
##### `microsoftplanner.newTask`
Triggers when a new task is created in a specified plan
**Requires Confirmation:** No
**Parameters:**
* `planId` (TEXT, Required): ID of the plan to monitor for new tasks
* `assignedToUserId` (TEXT, Optional): Only trigger for tasks assigned to this user ID. Leave empty to monitor all tasks
* `filterByAssigneeEmail` (TEXT, Optional): Only trigger for tasks assigned to this email address (UPN). Leave empty to monitor all tasks
**Output:** Returns the operation result
***
### Task Completed
##### `microsoftplanner.taskCompleted`
Triggers when a task is marked as completed (100% progress) in a specified plan
**Requires Confirmation:** No
**Parameters:**
* `planId` (TEXT, Required): ID of the plan to monitor for completed tasks
**Output:** Returns the operation result
***
## Common Use Cases
* Manage and organize your Microsoft Planner data
* Automate workflows with Microsoft Planner
* Generate insights and reports
* Connect Microsoft Planner with other tools
## Best Practices
**Getting Started:**
1. **Prerequisite:** A Microsoft Admin must [approve the Langdock application](/administration/microsoft-admin-approval) in your Microsoft workspace once.
2. Enable the Microsoft Planner integration in your workspace settings
3. Authenticate using OAuth
4. Test the connection with a simple read operation
5. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Microsoft Planner integration, contact [support@langdock.com](mailto:support@langdock.com)
# Power BI
Source: https://docs.langdock.com/administration/integrations/power-bi
Microsoft Power BI REST API integration for datasets, reports, and workspaces
## Overview
Microsoft Power BI REST API integration for datasets, reports, and workspaces. Through Langdock's integration, you can access and manage Power BI directly from your conversations.
**Authentication:** OAuth\
**Category:** Data & Analytics\
**Availability:** All workspace plans
## Available Actions
### List Workspaces
##### `powerbi.list_workspaces`
List workspaces (groups) the user has access to.
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns a JSON array of workspaces with workspace metadata
***
### List Datasets
##### `powerbi.list_datasets`
List datasets in My Workspace or a specified workspace.
**Requires Confirmation:** No
**Parameters:**
* `groupId` (TEXT, Optional): Workspace (Group ID). If omitted, lists datasets from My Workspace
**Output:** Returns a JSON array of datasets with dataset metadata
***
### List Dataset Tables
##### `powerbi.list_dataset_tables`
List tables for a dataset to discover exact table and column names before writing DAX. Note: Only works for push API datasets.
**Requires Confirmation:** No
**Parameters:**
* `datasetId` (TEXT, Required): The Power BI dataset ID (GUID)
**Output:** Returns a JSON object with table definitions including columns and measures
***
### Execute DAX Query
##### `powerbi.execute_dax_query`
Run a DAX (SQL-like) query against a Power BI dataset using the Execute Queries API.
**Requires Confirmation:** Yes
**Parameters:**
* `datasetId` (TEXT, Required): The Power BI dataset ID to query (GUID)
* `query` (MULTI\_LINE\_TEXT, Required): DAX query text. Example: `EVALUATE ROW("Total Sales", [Total Sales])` or full DAX EVALUATE expression
**Output:** Returns query results with table data from the executed DAX query
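For reference, the underlying Power BI Execute Queries REST endpoint expects a JSON body with a `queries` array; the integration assembles and sends this for you. A sketch of the payload shape (the dataset GUID below is hypothetical):

```python
import json

dataset_id = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # hypothetical GUID
dax = 'EVALUATE ROW("Total Sales", [Total Sales])'

# Body shape for POST .../datasets/{dataset_id}/executeQueries
payload = {
    "queries": [{"query": dax}],
    "serializerSettings": {"includeNulls": True},
}
body = json.dumps(payload)
```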
***
### Search Datasets
##### `powerbi.search_datasets`
Searches for Power BI datasets by name.
**Requires Confirmation:** No
**Parameters:**
* `searchTerm` (TEXT, Required): The term to search for in dataset names
* `groupId` (TEXT, Optional): Workspace (Group ID). If omitted, searches My Workspace
**Output:** Returns matching datasets with id, name, configuredBy, isRefreshable, createdDate, webUrl, and total counts
***
### List Reports
##### `powerbi.list_reports`
List reports in a workspace or in My Workspace.
**Requires Confirmation:** No
**Parameters:**
* `groupId` (TEXT, Optional): Workspace (Group ID). If omitted, lists reports from My Workspace
**Output:** Returns a JSON array of reports with report metadata
***
### List Report Pages
##### `powerbi.list_report_pages`
List pages for a report.
**Requires Confirmation:** No
**Parameters:**
* `reportId` (TEXT, Required): The report ID (GUID)
* `groupId` (TEXT, Optional): Workspace (Group ID) if the report is in a workspace
**Output:** Returns a JSON array of report pages with page details
***
### Search Reports
##### `powerbi.search_reports`
Searches for Power BI reports by name or description.
**Requires Confirmation:** No
**Parameters:**
* `searchTerm` (TEXT, Required): The term to search for in report names and descriptions
* `groupId` (TEXT, Optional): Workspace (Group ID). If omitted, searches My Workspace
**Output:** Returns matching reports with id, name, description, webUrl, embedUrl, datasetId, and timestamps
***
### Export Report to File (Initiate)
##### `powerbi.export_report_to_file_initiate`
Initiates a Power BI report export job. Returns export ID for use with the fetch action.
**Requires Confirmation:** Yes
**Parameters:**
* `reportId` (TEXT, Required): The report ID (GUID)
* `format` (SELECT, Required): Export format (PDF, PPTX, PNG)
* `groupId` (TEXT, Optional): Workspace (Group ID) that contains the report. If omitted, exports from My Workspace
**Output:** Returns export job details with exportId, groupId, reportId, and format
***
### Export Paginated Report (Initiate)
##### `powerbi.export_paginated_report_initiate`
Initiates a Power BI paginated report export job. Supports more formats than regular reports including XLSX, CSV, DOCX.
**Requires Confirmation:** Yes
**Parameters:**
* `reportId` (TEXT, Required): The paginated report ID (GUID)
* `format` (SELECT, Required): Export format (PDF, XLSX, DOCX, CSV)
* `groupId` (TEXT, Optional): Workspace (Group ID) that contains the report
* `reportParameters` (OBJECT, Optional): Parameters to pass to the report as key-value pairs (JSON object). Example: `{"StartDate": "2024-01-01", "EndDate": "2024-12-31"}`
* `csvDelimiter` (TEXT, Optional): Delimiter for CSV export (default is comma, max 1 character)
* `csvEncoding` (SELECT, Optional): Encoding for CSV export (UTF-8, UTF-16)
* `imageFormat` (SELECT, Optional): When format is IMAGE, specify the image type (JPEG, PNG)
**Output:** Returns export job initiation response with export ID and configuration
***
### Export Report to File (Fetch)
##### `powerbi.export_report_to_file_fetch`
Fetches a Power BI report export by checking status and downloading the file if ready.
**Requires Confirmation:** No
**Parameters:**
* `reportId` (TEXT, Required): The report ID (GUID)
* `exportId` (TEXT, Required): The export ID returned from the initiate action
* `groupId` (TEXT, Optional): Workspace (Group ID) that contains the report. If omitted, fetches from My Workspace
**Output:** Returns export file object or status information
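Initiate and fetch form a two-step job: start the export, then repeatedly call the fetch action until the status reports the file is ready. A sketch of the polling loop — `check_status` here is a stand-in closure, not a real call, and a production loop would sleep between attempts:

```python
def poll_export(check_status, max_attempts=10):
    """Call check_status() until it reports success, then return the file.

    check_status() -> dict with 'status' ('Running' or 'Succeeded')
    and, once done, a 'file' entry with the exported bytes.
    """
    for _ in range(max_attempts):
        result = check_status()
        if result["status"] == "Succeeded":
            return result["file"]
        # In real code: time.sleep(a few seconds) before the next attempt.
    raise TimeoutError("export did not finish within max_attempts checks")

# Stand-in that succeeds on the third check.
states = iter([
    {"status": "Running"},
    {"status": "Running"},
    {"status": "Succeeded", "file": b"%PDF..."},
])
file_bytes = poll_export(lambda: next(states))
```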
***
### List Dashboards
##### `powerbi.list_dashboards`
List dashboards in a workspace or in My Workspace.
**Requires Confirmation:** No
**Parameters:**
* `groupId` (TEXT, Optional): Workspace (Group ID). If omitted, lists dashboards from My Workspace
**Output:** Returns a JSON array of dashboards with dashboard metadata
***
### List Dashboard Tiles
##### `powerbi.list_dashboard_tiles`
List tiles on a dashboard.
**Requires Confirmation:** No
**Parameters:**
* `dashboardId` (TEXT, Required): The dashboard ID (GUID)
* `groupId` (TEXT, Optional): Workspace (Group ID) if the dashboard is in a workspace
**Output:** Returns a JSON array of dashboard tiles with tile metadata
***
### Get Dataset Details
##### `powerbi.get_dataset_details`
Gets detailed information about a specific Power BI dataset.
**Requires Confirmation:** No
**Parameters:**
* `datasetId` (TEXT, Required): The dataset ID (GUID)
* `groupId` (TEXT, Optional): Workspace (Group ID). If omitted, uses My Workspace
**Output:** Returns detailed dataset information including id, name, configuredBy, isRefreshable, targetStorageMode, createdDate, webUrl, and more
***
### Get Report Details
##### `powerbi.get_report_details`
Gets detailed information about a specific Power BI report.
**Requires Confirmation:** No
**Parameters:**
* `reportId` (TEXT, Required): The report ID (GUID)
* `groupId` (TEXT, Optional): Workspace (Group ID). If omitted, uses My Workspace
**Output:** Returns detailed report metadata object
***
### Get Dashboard Details
##### `powerbi.get_dashboard_details`
Gets detailed information about a specific Power BI dashboard.
**Requires Confirmation:** No
**Parameters:**
* `dashboardId` (TEXT, Required): The dashboard ID (GUID)
* `groupId` (TEXT, Optional): Workspace (Group ID). If omitted, uses My Workspace
**Output:** Returns detailed dashboard metadata object
***
### Get Tile Details
##### `powerbi.get_tile_details`
Gets detailed information about a specific Power BI dashboard tile.
**Requires Confirmation:** No
**Parameters:**
* `dashboardId` (TEXT, Required): The dashboard ID (GUID)
* `tileId` (TEXT, Required): The tile ID (GUID)
* `groupId` (TEXT, Optional): Workspace (Group ID). If omitted, uses My Workspace
**Output:** Returns detailed tile metadata object
***
### Get Table Sample
##### `powerbi.get_table_sample`
Gets a sample of data from a specific table in a Power BI dataset.
**Requires Confirmation:** No
**Parameters:**
* `datasetId` (TEXT, Required): The dataset ID (GUID)
* `tableName` (TEXT, Required): The name of the table to sample (e.g., "Sales")
* `sampleSize` (TEXT, Optional): Number of rows to sample (default: 100)
**Output:** Returns table sample with columns, sampleData rows, rowCount, columnCount, and metadata
***
### Get Table Count
##### `powerbi.get_table_count`
Gets the row count of a specific table in a Power BI dataset.
**Requires Confirmation:** No
**Parameters:**
* `datasetId` (TEXT, Required): The dataset ID (GUID)
* `tableName` (TEXT, Required): The name of the table to count (e.g., "Sales")
**Output:** Returns the row count for the specified table
***
### Get Dataset Schema
##### `powerbi.get_dataset_schema`
Gets the complete schema of a Power BI dataset including tables, columns, measures, and relationships.
**Requires Confirmation:** No
**Parameters:**
* `datasetId` (TEXT, Required): The dataset ID (GUID)
* `groupId` (TEXT, Optional): Workspace (Group ID). If omitted, uses My Workspace
**Output:** Returns comprehensive schema with dataset metadata, tables with columns and measures, parameters, and summary statistics
***
### Get Dataset Dependencies
##### `powerbi.get_dataset_dependencies`
Gets reports and dashboards that use a specific Power BI dataset.
**Requires Confirmation:** No
**Parameters:**
* `datasetId` (TEXT, Required): The dataset ID (GUID)
* `groupId` (TEXT, Optional): Workspace (Group ID). If omitted, uses My Workspace
**Output:** Returns list of objects (reports and dashboards) that reference the dataset
***
## Common Use Cases
* Manage and organize your Power BI data
* Automate workflows with Power BI
* Generate insights and reports
* Connect Power BI with other tools
## Best Practices
**Getting Started:**
1. Enable the Power BI integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Power BI integration, contact [support@langdock.com](mailto:support@langdock.com)
# Pylon
Source: https://docs.langdock.com/administration/integrations/pylon
Pylon is a B2B customer support platform that helps teams manage support tickets across multiple channels including Slack, Email, Microsoft Teams, and more
## Overview
Pylon is a B2B customer support platform that helps teams manage support tickets across multiple channels including Slack, Email, Microsoft Teams, and more. Through Langdock's integration, you can access and manage Pylon directly from your conversations.
**Authentication:** API Key\
**Category:** CRM & Customer Support\
**Availability:** All workspace plans
## Available Actions
### Get Issue
##### `pylon.getIssue`
Retrieve details of a specific issue by ID or ticket number
**Requires Confirmation:** No
**Parameters:**
* `issueIdOrTicketNumber` (TEXT, Required): The unique identifier or ticket number of the issue
**Output:** Returns the issue details with cleaned HTML content
***
### Search Issues
##### `pylon.searchIssues`
Search for issues with advanced filtering capabilities
**Requires Confirmation:** No
**Parameters:**
* `filterField` (SELECT, Optional): The field to filter by (created\_at, account\_id, ticket\_form\_id, requester\_id, follower\_user\_id, follower\_contact\_id, state, tags, title, body\_html, or custom field slug)
* `filterOperator` (SELECT, Optional): The operator to use for filtering (equals, in, not\_in, contains, does\_not\_contain, string\_contains, string\_does\_not\_contain, time\_is\_after, time\_is\_before, time\_range)
* `filterValue` (TEXT, Optional): The value to filter by (for single value operators like equals, string\_contains, time\_is\_after, time\_is\_before)
* `filterValues` (TEXT, Optional): JSON array of values for operators: in, not\_in, contains, does\_not\_contain
* `startTime` (TEXT, Optional): Start time for time\_range operator (RFC3339 format)
* `endTime` (TEXT, Optional): End time for time\_range operator (RFC3339 format)
* `cursor` (TEXT, Optional): Cursor for pagination
* `limit` (NUMBER, Optional): Number of issues to fetch (1-1000, default 100)
**Output:** Returns a list of issues matching the search criteria
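Note the split between the filter parameters: single-value operators use `filterValue`, multi-value operators take `filterValues` as a JSON array string, and `time_range` uses `startTime`/`endTime` instead of either. A sketch of assembling two searches (field and operator names taken from the list above; values are illustrative):

```python
import json

# tags IN ['bug', 'urgent'] -> multi-value operator, JSON array string
tag_search = {
    "filterField": "tags",
    "filterOperator": "in",
    "filterValues": json.dumps(["bug", "urgent"]),
}

# created_at within a window -> time_range uses startTime/endTime
time_search = {
    "filterField": "created_at",
    "filterOperator": "time_range",
    "startTime": "2024-01-01T00:00:00Z",
    "endTime": "2024-06-30T23:59:59Z",
}
```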
***
### Create Issue
##### `pylon.createIssue`
Create a new issue
**Requires Confirmation:** Yes
**Parameters:**
* `title` (TEXT, Required): The title of the issue
* `bodyHtml` (MULTI\_LINE\_TEXT, Optional): The HTML content of the issue. Use HTML tags (e.g., `<p>`, `<b>`, `<ul>`). Markdown syntax will NOT be rendered
* `accountId` (TEXT, Optional): The ID of the account associated with this issue
* `assigneeId` (TEXT, Optional): The ID of the user to assign this issue to
* `teamId` (TEXT, Optional): The ID of the team to assign this issue to
* `priority` (SELECT, Optional): The priority of the issue (low, medium, high, urgent)
* `requesterEmail` (TEXT, Optional): Email of the person requesting this issue
* `requesterId` (TEXT, Optional): ID of the requester
* `requesterName` (TEXT, Optional): Name of the requester
* `requesterAvatarUrl` (TEXT, Optional): URL to the requester's avatar image
* `tags` (TEXT, Optional): Tags for the issue (JSON array)
* `customFields` (TEXT, Optional): Custom fields (JSON array)
* `attachmentUrls` (TEXT, Optional): URLs to attachments (JSON array)
* `destination` (TEXT, Optional): Destination for the issue (e.g., email, slack)
* `destinationEmail` (TEXT, Optional): Email address for the destination
* `emailCcs` (TEXT, Optional): CC email addresses (JSON array)
* `emailBccs` (TEXT, Optional): BCC email addresses (JSON array)
* `createdAt` (TEXT, Optional): Timestamp when issue was created (RFC3339 format)
* `userId` (TEXT, Optional): ID of the user creating the issue
* `contactId` (TEXT, Optional): ID of the contact associated with the issue
**Output:** Returns the created issue details
***
### List Issues
##### `pylon.listIssues`
Get a list of issues
**Requires Confirmation:** No
**Parameters:**
* `startTime` (TEXT, Optional): Start time for filtering issues (RFC3339 format)
* `endTime` (TEXT, Optional): End time for filtering issues (RFC3339 format)
**Output:** Returns a list of issues
***
### Delete Issue
##### `pylon.deleteIssue`
Delete an existing issue
**Requires Confirmation:** Yes
**Parameters:**
* `issueId` (TEXT, Required): The ID of the issue to delete
**Output:** Confirmation of deletion
***
### Update Issue
##### `pylon.updateIssue`
Update an existing issue
**Requires Confirmation:** Yes
**Parameters:**
* `issueId` (TEXT, Required): The ID of the issue to update
* `assigneeId` (TEXT, Optional): The ID of the user to assign (empty string removes assignee)
* `teamId` (TEXT, Optional): The ID of the team to assign (empty string removes team)
* `state` (SELECT, Optional): The state of the issue (new, in\_progress, waiting, closed)
* `requesterId` (TEXT, Optional): ID of the requester to update
* `customerPortalVisible` (SELECT, Optional): Whether the issue is visible in the customer portal (true, false)
* `tags` (TEXT, Optional): Updated tags for the issue (JSON array)
* `customFields` (TEXT, Optional): Updated custom fields (JSON array)
**Output:** Returns the updated issue details
***
### Add Draft Reply to Issue
##### `pylon.addDraftReplytoIssue`
Add a draft reply to an existing issue
**Requires Confirmation:** Yes
**Parameters:**
* `issueId` (TEXT, Required): The unique identifier of the issue
* `draftResponse` (MULTI\_LINE\_TEXT, Required): The draft reply content to add to the issue
**Output:** Confirmation of draft reply addition
***
### Add Note to Issue
##### `pylon.addNotetoIssue`
Add an internal note to an issue. IMPORTANT: Use HTML formatting, NOT Markdown. Markdown will be stripped and appear as plain text
**Requires Confirmation:** Yes
**Parameters:**
* `issueId` (TEXT, Required): The unique identifier of the issue
* `content` (MULTI\_LINE\_TEXT, Required): The HTML content of the note. Use HTML tags like `<p>`, `<b>`, `<ul>`, `<li>`. DO NOT use Markdown syntax like \*\*bold\*\* or - lists as it will be stripped
**Output:** Confirmation of note addition
***
### Snooze Issue
##### `pylon.snoozeIssue`
Snooze an issue until a specific time
**Requires Confirmation:** Yes
**Parameters:**
* `issueId` (TEXT, Required): The ID or number of the issue to snooze
* `snoozeUntil` (TEXT, Required): Timestamp to snooze the issue until (RFC3339 format)
**Output:** Confirmation of snooze action
***
### Get Issue Followers
##### `pylon.getIssueFollowers`
Get a list of followers for an issue
**Requires Confirmation:** No
**Parameters:**
* `issueId` (TEXT, Required): The ID or number of the issue
**Output:** Returns a list of followers
***
### Get Issue Messages
##### `pylon.getIssueMessages`
Retrieve all messages for a specific issue
**Requires Confirmation:** No
**Parameters:**
* `issueId` (TEXT, Required): The unique identifier of the issue
**Output:** Returns all messages for the issue
***
### Manage Issue Followers
##### `pylon.manageIssueFollowers`
Add or remove followers from an issue
**Requires Confirmation:** Yes
**Parameters:**
* `issueId` (TEXT, Required): The ID or number of the issue
* `operation` (SELECT, Optional): Whether to add or remove followers (add, remove)
* `userIds` (TEXT, Optional): User IDs to add/remove as followers (JSON array)
* `contactIds` (TEXT, Optional): Contact IDs to add/remove as followers (JSON array)
**Output:** Confirmation of follower management
***
### Manage External Issues
##### `pylon.manageExternalIssues`
Link or unlink external issues
**Requires Confirmation:** Yes
**Parameters:**
* `issueId` (TEXT, Required): The ID of the Pylon issue
* `operation` (SELECT, Required): Whether to link or unlink external issues (link, unlink)
* `source` (TEXT, Required): The source system (e.g., linear, jira)
* `externalId` (TEXT, Required): The ID of the external issue
* `link` (TEXT, Optional): URL link to the external issue
**Output:** Confirmation of external issue management
***
### Redact Message
##### `pylon.redactMessage`
Redact a message in an issue
**Requires Confirmation:** Yes
**Parameters:**
* `messageId` (TEXT, Required): The ID of the message to redact
**Output:** Confirmation of message redaction
***
### List Accounts
##### `pylon.listAccounts`
Get a paginated list of accounts
**Requires Confirmation:** No
**Parameters:**
* `cursor` (TEXT, Optional): The cursor to use for pagination
* `limit` (NUMBER, Optional): Number of accounts to fetch (1-1000, default: 100)
**Output:** Returns a list of accounts
***
### Create Account
##### `pylon.createAccount`
Create a new account
**Requires Confirmation:** Yes
**Parameters:**
* `name` (TEXT, Required): The name of the account
* `domains` (TEXT, Optional): List of domains (comma-separated, e.g., acme.com, acme.org)
* `primaryDomain` (TEXT, Optional): Primary domain (must be in domains list)
* `logoUrl` (TEXT, Optional): URL to account logo (square PNG, JPG, or JPEG)
* `tags` (TEXT, Optional): Tags for the account (JSON array, e.g., \['enterprise', 'priority'])
* `externalIds` (TEXT, Optional): External IDs (JSON array of objects, e.g., \[{'external\_id': '123', 'label': 'CRM ID'}])
* `customFields` (TEXT, Optional): Custom fields (JSON array of objects, e.g., \[{'slug': 'industry', 'value': 'Technology'}])
* `channels` (TEXT, Optional): Channels to link (JSON array of objects, e.g., \[{'channel\_id': 'ch123', 'source': 'slack', 'is\_primary': true}])
**Output:** Returns the created account details
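The array-valued parameters above each expect a specific JSON shape. Putting them together in one illustrative request (all IDs and values are placeholders taken from the parameter descriptions):

```json
{
  "name": "Acme Inc",
  "domains": "acme.com, acme.org",
  "primaryDomain": "acme.com",
  "tags": ["enterprise", "priority"],
  "externalIds": [{"external_id": "123", "label": "CRM ID"}],
  "customFields": [{"slug": "industry", "value": "Technology"}],
  "channels": [{"channel_id": "ch123", "source": "slack", "is_primary": true}]
}
```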
***
### Get Account
##### `pylon.getAccount`
Get an account by its ID or external ID
**Requires Confirmation:** No
**Parameters:**
* `accountId` (TEXT, Required): The ID or external ID of the account
**Output:** Returns the account details
***
### Update Account
##### `pylon.updateAccount`
Update an existing account
**Requires Confirmation:** Yes
**Parameters:**
* `accountId` (TEXT, Required): The ID or external ID of the account to update
* `name` (TEXT, Optional): New name for the account
* `domains` (TEXT, Optional): Updated list of domains (comma-separated)
* `primaryDomain` (TEXT, Optional): Primary domain (must be in domains list)
* `ownerId` (TEXT, Optional): ID of the new owner
* `tags` (TEXT, Optional): Updated tags (JSON array)
* `externalIds` (TEXT, Optional): Updated external IDs (JSON array)
* `customFields` (TEXT, Optional): Updated custom fields (JSON array)
* `channels` (TEXT, Optional): Updated channels (JSON array)
* `keepExistingPrimaryDomain` (SELECT, Optional): Set to true to keep the existing primary domain when updating domains without specifying a new primary (true, false)
**Output:** Returns the updated account details
***
### Delete Account
##### `pylon.deleteAccount`
Delete an existing account
**Requires Confirmation:** Yes
**Parameters:**
* `accountId` (TEXT, Required): The ID or external ID of the account to delete
**Output:** Confirmation of deletion
***
### Search Accounts
##### `pylon.searchAccounts`
Search for accounts using field-specific operators. IMPORTANT: each field supports only certain operators. domains and tags accept contains, does\_not\_contain, in, not\_in; name and external\_ids accept equals, in, not\_in. Use filterValue for single-value operators (equals, contains, does\_not\_contain) and filterValues for array operators (in, not\_in)
**Requires Confirmation:** No
**Parameters:**
* `filterField` (SELECT, Optional): Field to filter by. Each field supports different operators - check operator compatibility (domains, tags, name, external\_ids)
* `filterOperator` (SELECT, Optional): Operator for filtering. MUST be compatible with selected field (see field descriptions) (equals, in, not\_in, contains, does\_not\_contain)
* `filterValue` (TEXT, Optional): Use for SINGLE value operators: equals, contains, does\_not\_contain. Leave empty when using filterValues
* `filterValues` (TEXT, Optional): Use ONLY for ARRAY operators: in, not\_in. Must be JSON array format. Leave empty when using filterValue
* `cursor` (TEXT, Optional): The cursor for pagination
* `limit` (NUMBER, Optional): Number of results (1-1000)
**Output:** Returns a list of accounts matching the search criteria
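To illustrate the filterValue/filterValues split (domain and tag values are placeholders), a single-value search with `contains` uses filterValue:

```json
{"filterField": "domains", "filterOperator": "contains", "filterValue": "acme.com"}
```

while an array search with `in` puts a JSON array string in filterValues and leaves filterValue empty:

```json
{"filterField": "tags", "filterOperator": "in", "filterValues": "[\"enterprise\", \"priority\"]"}
```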
***
### Create Account Activity
##### `pylon.createAccountActivity`
Create a new activity for an account
**Requires Confirmation:** Yes
**Parameters:**
* `accountId` (TEXT, Required): The ID of the account to create the activity for
* `slug` (TEXT, Required): The slug of the activity type. Get valid slugs from 'Get Activity Types' action
* `bodyHtml` (MULTI\_LINE\_TEXT, Optional): Optional HTML content to display in the activity
* `contactId` (TEXT, Optional): Optional contact ID of the actor
* `userId` (TEXT, Optional): Optional user ID of the actor
* `happenedAt` (TEXT, Optional): Timestamp (RFC3339) when activity happened (defaults to now)
* `link` (TEXT, Optional): Optional link to add to the activity
* `linkText` (TEXT, Optional): Link text to display (defaults to 'Open link')
**Output:** Returns the created activity details
***
### Get Activity Types
##### `pylon.getActivityTypes`
Get custom activity types configured in your Pylon instance
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns a list of activity types
***
### Create Account Highlight
##### `pylon.createAccountHighlight`
Create a new highlight for an account
**Requires Confirmation:** Yes
**Parameters:**
* `accountId` (TEXT, Required): The ID of the account to create the highlight for
* `contentHtml` (MULTI\_LINE\_TEXT, Required): The HTML content for this highlight
* `expiresAt` (TEXT, Optional): Optional RFC3339 timestamp when highlight expires
**Output:** Returns the created highlight details
***
### Update Account Highlight
##### `pylon.updateAccountHighlight`
Update an existing account highlight
**Requires Confirmation:** Yes
**Parameters:**
* `accountId` (TEXT, Required): The ID of the account that the highlight belongs to
* `highlightId` (TEXT, Required): The ID of the highlight to update
* `contentHtml` (MULTI\_LINE\_TEXT, Optional): The updated HTML content for this highlight
* `expiresAt` (TEXT, Optional): Updated expiration timestamp (RFC3339)
**Output:** Returns the updated highlight details
***
### Delete Account Highlight
##### `pylon.deleteAccountHighlight`
Delete an existing account highlight
**Requires Confirmation:** Yes
**Parameters:**
* `accountId` (TEXT, Required): The ID of the account that the highlight belongs to
* `highlightId` (TEXT, Required): The ID of the highlight to delete
**Output:** Confirmation of deletion
***
### List Custom Fields
##### `pylon.listCustomFields`
Get all custom fields configured in Pylon. REQUIRED: objectType must be one of: account, issue, or contact
**Requires Confirmation:** No
**Parameters:**
* `objectType` (SELECT, Required): Select the object type (account, issue, or contact)
**Output:** Returns a list of custom fields
***
### Get Custom Field
##### `pylon.getCustomField`
Get a custom field by its ID
**Requires Confirmation:** No
**Parameters:**
* `customFieldId` (TEXT, Required): The ID of the custom field
**Output:** Returns the specific custom field details
***
### Create Custom Field
##### `pylon.createCustomField`
Create a new custom field
**Requires Confirmation:** Yes
**Parameters:**
* `slug` (TEXT, Required): Unique identifier for the custom field
* `label` (TEXT, Required): Display label for the custom field
* `type` (SELECT, Required): The type of the custom field (text, number, decimal, boolean, date, datetime, user, url, select, multiselect)
* `objectType` (SELECT, Required): The object type this field applies to (account, issue, contact)
* `description` (TEXT, Optional): Description of the custom field
* `defaultValue` (TEXT, Optional): Default value for single-valued fields
* `defaultValues` (TEXT, Optional): Default values for multi-valued fields (JSON array)
* `selectOptions` (TEXT, Optional): Options for select/multiselect fields (JSON array)
**Output:** Returns the created custom field details
***
### Update Custom Field
##### `pylon.updateCustomField`
Update an existing custom field
**Requires Confirmation:** Yes
**Parameters:**
* `customFieldId` (TEXT, Required): The ID of the custom field to update
* `slug` (TEXT, Optional): Updated slug for the custom field
* `label` (TEXT, Optional): Updated label for the custom field
* `description` (TEXT, Optional): Updated description
* `defaultValue` (TEXT, Optional): Updated default value for single-valued fields
* `defaultValues` (TEXT, Optional): Updated default values for multi-valued fields (JSON array)
* `selectOptions` (TEXT, Optional): Updated options for select/multiselect fields (JSON array)
**Output:** Returns the updated custom field details
***
### List Users
##### `pylon.listUsers`
Get a list of all users
**Requires Confirmation:** No
**Parameters:**
* `cursor` (TEXT, Optional): Pagination cursor from previous request
* `limit` (NUMBER, Optional): Number of users to fetch (max 1000)
**Output:** Returns a list of users
***
### Get User
##### `pylon.getUser`
Get a user by ID
**Requires Confirmation:** No
**Parameters:**
* `userId` (TEXT, Required): The ID of the user to fetch
**Output:** Returns the user details
***
### Update User
##### `pylon.updateUser`
Update a user's role or status
**Requires Confirmation:** Yes
**Parameters:**
* `userId` (TEXT, Required): The ID of the user to update
* `roleId` (TEXT, Optional): The new role ID for the user
* `status` (SELECT, Optional): User status (active, away, or out\_of\_office)
**Output:** Returns the updated user details
***
### Search Users
##### `pylon.searchUsers`
Search for users by email
**Requires Confirmation:** No
**Parameters:**
* `filterField` (SELECT, Optional): Field to filter by (currently only 'email' is supported)
* `filterOperator` (SELECT, Optional): Operator for the filter (equals, in, not\_in)
* `filterValue` (TEXT, Optional): Value for equals operator
* `filterValues` (TEXT, Optional): JSON array of values for in/not\_in operators
* `cursor` (TEXT, Optional): Pagination cursor from previous request
* `limit` (NUMBER, Optional): Number of users to fetch (max 1000)
**Output:** Returns a list of users matching the search criteria
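For example, to find several users at once with the `in` operator, `filterValues` takes a JSON array of addresses (the addresses below are placeholders):

```json
{
  "filterField": "email",
  "filterOperator": "in",
  "filterValues": "[\"alice@example.com\", \"bob@example.com\"]"
}
```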
***
### List Knowledge Bases
##### `pylon.listKnowledgeBases`
Get a list of all knowledge bases
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns a list of knowledge bases
***
### Get Knowledge Base
##### `pylon.getKnowledgeBase`
Get details of a specific knowledge base
**Requires Confirmation:** No
**Parameters:**
* `knowledgeBaseId` (TEXT, Required): The ID of the knowledge base
**Output:** Returns the knowledge base details
***
### Create KB Article
##### `pylon.createKBArticle`
Create a new knowledge base article. IMPORTANT: The contentHtml field is mapped to body\_html in the API
**Requires Confirmation:** Yes
**Parameters:**
* `knowledgeBaseId` (TEXT, Required): The ID of the knowledge base. Get valid IDs from the 'List Knowledge Bases' action
* `title` (TEXT, Required): The title of the article
* `contentHtml` (MULTI\_LINE\_TEXT, Required): The HTML content of the article (maps to body\_html in API). Use proper HTML tags. Markdown will NOT be rendered
* `collectionId` (TEXT, Optional): The ID of the collection to add the article to
* `slug` (TEXT, Optional): URL slug for the article
* `authorUserId` (TEXT, Optional): The ID of the user to set as the article author. Get valid user IDs from the 'List Users' action
* `publishedAt` (TEXT, Optional): Publication timestamp (RFC3339)
* `isPublished` (SELECT, Optional): Whether the article should be published immediately (true, false)
* `isUnlisted` (SELECT, Optional): Whether the article should be unlisted (accessible only via direct link) (true, false)
**Output:** Returns the created article details
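Because `contentHtml` is sent as `body_html` and rendered as raw HTML, the article body must use HTML tags rather than Markdown. A minimal illustrative request (IDs are placeholders):

```json
{
  "knowledgeBaseId": "kb_XXXXXXXX",
  "title": "Getting Started",
  "contentHtml": "<h2>Welcome</h2><p>Use <strong>HTML tags</strong> here; Markdown syntax such as **bold** will not be rendered.</p>",
  "isPublished": "true"
}
```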
***
### Update KB Article
##### `pylon.updateKBArticle`
Update an existing knowledge base article. IMPORTANT: The contentHtml field is mapped to body\_html in the API
**Requires Confirmation:** Yes
**Parameters:**
* `articleId` (TEXT, Required): The ID of the article to update
* `title` (TEXT, Optional): Updated title of the article
* `contentHtml` (MULTI\_LINE\_TEXT, Optional): Updated HTML content (maps to body\_html in API). Use proper HTML tags. Markdown will NOT be rendered
* `slug` (TEXT, Optional): Updated URL slug
* `publishedAt` (TEXT, Optional): Updated publication timestamp (RFC3339)
**Output:** Returns the updated article details
***
### List KB Collections
##### `pylon.listKBCollections`
Get a list of knowledge base collections
**Requires Confirmation:** No
**Parameters:**
* `knowledgeBaseId` (TEXT, Required): The ID of the knowledge base
**Output:** Returns a list of collections
***
### Create KB Collection
##### `pylon.createKBCollection`
Create a new knowledge base collection
**Requires Confirmation:** Yes
**Parameters:**
* `knowledgeBaseId` (TEXT, Required): The ID of the knowledge base
* `title` (TEXT, Required): The title of the collection
* `description` (TEXT, Optional): Description of the collection
* `slug` (TEXT, Optional): URL slug for the collection
**Output:** Returns the created collection details
***
### Create KB Route Redirect
##### `pylon.createKBRouteRedirect`
Create a route redirect in the knowledge base
**Requires Confirmation:** Yes
**Parameters:**
* `knowledgeBaseId` (TEXT, Required): The ID of the knowledge base
* `fromPath` (TEXT, Required): The path to redirect from
* `toPath` (TEXT, Required): The path to redirect to
**Output:** Returns the created redirect details
***
### List Teams
##### `pylon.listTeams`
Get a list of all teams
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns a list of teams
***
### Get Team
##### `pylon.getTeam`
Get details of a specific team
**Requires Confirmation:** No
**Parameters:**
* `teamId` (TEXT, Required): The ID of the team
**Output:** Returns the team details
***
### Create Team
##### `pylon.createTeam`
Create a new team
**Requires Confirmation:** Yes
**Parameters:**
* `name` (TEXT, Required): The name of the team
* `description` (TEXT, Optional): Description of the team
* `slackChannelId` (TEXT, Optional): Associated Slack channel ID
**Output:** Returns the created team details
***
### Update Team
##### `pylon.updateTeam`
Update an existing team
**Requires Confirmation:** Yes
**Parameters:**
* `teamId` (TEXT, Required): The ID of the team to update
* `name` (TEXT, Optional): Updated name of the team
* `description` (TEXT, Optional): Updated description
* `slackChannelId` (TEXT, Optional): Updated Slack channel ID
**Output:** Returns the updated team details
***
## Common Use Cases
* Manage and organize your Pylon data
* Automate workflows with Pylon
* Generate insights and reports
* Connect Pylon with other tools
## Best Practices
**Getting Started:**
1. Enable the Pylon integration in your workspace settings
2. Authenticate using API Key
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your API Key credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Pylon integration, contact [support@langdock.com](mailto:support@langdock.com)
# Qdrant
Source: https://docs.langdock.com/administration/integrations/qdrant
Vector similarity search engine and vector database
## Overview
Qdrant is a vector similarity search engine and vector database. Through Langdock's integration, you can access and manage Qdrant directly from your conversations.
**Authentication:** API Key\
**Category:** AI & Search\
**Availability:** All workspace plans
## Available Actions
### Search Collection
##### `qdrant.searchCollection`
Searches the database for the most relevant information based on the query provided
**Requires Confirmation:** No
**Parameters:**
* `query` (VECTOR, Required): The vector query to search for
**Output:** Returns search results with similarity scores and payload data
***
## Common Use Cases
* Manage and organize your Qdrant data
* Automate workflows with Qdrant
* Generate insights and reports
* Connect Qdrant with other tools
## Best Practices
**Getting Started:**
1. Enable the Qdrant integration in your workspace settings
2. Authenticate using API Key
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your API Key credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Qdrant integration, contact [support@langdock.com](mailto:support@langdock.com)
# Salesforce
Source: https://docs.langdock.com/administration/integrations/salesforce
Cloud-based CRM platform that unifies sales, service, marketing, and commerce
## Overview
Salesforce is a cloud-based CRM platform that unifies sales, service, marketing, and commerce. Through Langdock's integration, you can access and manage Salesforce directly from your conversations.
**Authentication:** OAuth\
**Category:** CRM & Customer Support\
**Availability:** All workspace plans
## Available Actions
### Get account
##### `salesforce.getaccount`
Gets an account by id
**Requires Confirmation:** No
**Parameters:**
* `accountId` (TEXT, Required): Id of the account in Salesforce. Starts with 001
**Output:** Returns the account details
***
### Get campaign
##### `salesforce.getcampaign`
Gets a campaign by id
**Requires Confirmation:** No
**Parameters:**
* `campaignId` (TEXT, Required): Id of the campaign in Salesforce
**Output:** Returns the campaign details
***
### Get campaign member
##### `salesforce.getcampaignmember`
Gets a campaign member by id
**Requires Confirmation:** No
**Parameters:**
* `campaignMemberId` (TEXT, Required): Id of the campaign member to retrieve
**Output:** Returns the campaign member details
***
### Get campaign members for campaign
##### `salesforce.getcampaignmembersforcampaign`
Finds all campaign members by campaign id
**Requires Confirmation:** No
**Parameters:**
* `campaignId` (TEXT, Required): Campaign Id
* `fields` (TEXT, Optional): Comma-separated list of field API names to return. Defaults to 'Id,Name,CreatedDate,LastModifiedDate,OwnerId,CampaignId,ContactId,LeadId,Status'. Common fields include: Name, CampaignId, ContactId, LeadId, Status
**Output:** Returns a list of campaign members
***
### Get case
##### `salesforce.getcase`
Gets a case by id
**Requires Confirmation:** No
**Parameters:**
* `caseId` (TEXT, Required): Id of the case in Salesforce. Starts with 500
**Output:** Returns the case details
***
### Get contact
##### `salesforce.getcontact`
Gets a contact by id
**Requires Confirmation:** No
**Parameters:**
* `contactId` (TEXT, Required): Id of the contact. Starts with 003
**Output:** Returns the contact details
***
### Get content note
##### `salesforce.getcontentnote`
Gets an Enhanced Note (ContentNote) and optionally its ContentDocumentLink associations
**Requires Confirmation:** No
**Parameters:**
* `includeLinks` (BOOLEAN, Optional): If true, also returns ContentDocumentLink records for this note
* `contentNoteId` (TEXT, Required): Id of the content note
**Output:** Returns the content note details
***
### Get lead
##### `salesforce.getlead`
Gets a lead by id
**Requires Confirmation:** No
**Parameters:**
* `leadId` (TEXT, Required): Id of the lead. Starts with 00Q
**Output:** Returns the lead details
***
### Get opportunity
##### `salesforce.getopportunity`
Gets an opportunity by id
**Requires Confirmation:** No
**Parameters:**
* `opportunityId` (TEXT, Required): Id of the opportunity. Starts with 006
**Output:** Returns the opportunity details
***
### Get task
##### `salesforce.gettask`
Gets a task by id
**Requires Confirmation:** No
**Parameters:**
* `taskId` (TEXT, Required): Id of the task. Starts with 00T
**Output:** Returns the task details
***
### Get user
##### `salesforce.getuser`
Gets a user by id
**Requires Confirmation:** No
**Parameters:**
* `userId` (TEXT, Required): Id of the user. Starts with 005
**Output:** Returns the user details
***
### Find account
##### `salesforce.findaccount`
Finds an account by name
**Requires Confirmation:** No
**Parameters:**
* `accountName` (TEXT, Required): Name of the account to search for
**Output:** Returns matching accounts
***
### Find campaign
##### `salesforce.findcampaign`
Finds a campaign by name
**Requires Confirmation:** No
**Parameters:**
* `campaignName` (TEXT, Required): Name of the campaign to search for
**Output:** Returns matching campaigns
***
### Find case
##### `salesforce.findcase`
Finds a case by case number
**Requires Confirmation:** No
**Parameters:**
* `caseNumber` (TEXT, Required): Case number to search for
**Output:** Returns matching cases
***
### Find contact
##### `salesforce.findcontact`
Finds a contact by email
**Requires Confirmation:** No
**Parameters:**
* `email` (TEXT, Required): Email address to search for
**Output:** Returns matching contacts
***
### Find lead
##### `salesforce.findlead`
Finds a lead by email
**Requires Confirmation:** No
**Parameters:**
* `email` (TEXT, Required): Email address to search for
**Output:** Returns matching leads
***
### Find opportunity
##### `salesforce.findopportunity`
Finds an opportunity by name
**Requires Confirmation:** No
**Parameters:**
* `opportunityName` (TEXT, Required): Name of the opportunity to search for
**Output:** Returns matching opportunities
***
### Find task
##### `salesforce.findtask`
Finds a task by subject
**Requires Confirmation:** No
**Parameters:**
* `subject` (TEXT, Required): Subject of the task to search for
**Output:** Returns matching tasks
***
### Find user
##### `salesforce.finduser`
Finds a user by email
**Requires Confirmation:** No
**Parameters:**
* `email` (TEXT, Required): Email address to search for
**Output:** Returns matching users
***
### Create account
##### `salesforce.createaccount`
Creates a new account
**Requires Confirmation:** Yes
**Parameters:**
* `name` (TEXT, Required): Name of the account
* `type` (TEXT, Optional): Type of account (e.g., Customer, Partner, Competitor)
* `industry` (TEXT, Optional): Industry of the account
* `phone` (TEXT, Optional): Phone number
* `website` (TEXT, Optional): Website URL
* `billingStreet` (TEXT, Optional): Billing street address
* `billingCity` (TEXT, Optional): Billing city
* `billingState` (TEXT, Optional): Billing state
* `billingPostalCode` (TEXT, Optional): Billing postal code
* `billingCountry` (TEXT, Optional): Billing country
* `shippingStreet` (TEXT, Optional): Shipping street address
* `shippingCity` (TEXT, Optional): Shipping city
* `shippingState` (TEXT, Optional): Shipping state
* `shippingPostalCode` (TEXT, Optional): Shipping postal code
* `shippingCountry` (TEXT, Optional): Shipping country
* `description` (TEXT, Optional): Description of the account
**Output:** Returns the created account details
***
### Create campaign
##### `salesforce.createcampaign`
Creates a new campaign
**Requires Confirmation:** Yes
**Parameters:**
* `name` (TEXT, Required): Name of the campaign
* `type` (TEXT, Optional): Type of campaign (e.g., Email, Webinar, Trade Show)
* `status` (TEXT, Optional): Status of the campaign (e.g., Planned, In Progress, Completed)
* `startDate` (TEXT, Optional): Start date of the campaign (YYYY-MM-DD)
* `endDate` (TEXT, Optional): End date of the campaign (YYYY-MM-DD)
* `budgetedCost` (NUMBER, Optional): Budgeted cost of the campaign
* `actualCost` (NUMBER, Optional): Actual cost of the campaign
* `expectedRevenue` (NUMBER, Optional): Expected revenue from the campaign
* `description` (TEXT, Optional): Description of the campaign
**Output:** Returns the created campaign details
***
### Create case
##### `salesforce.createcase`
Creates a new case
**Requires Confirmation:** Yes
**Parameters:**
* `subject` (TEXT, Required): Subject of the case
* `status` (TEXT, Optional): Status of the case (e.g., New, In Progress, Closed)
* `priority` (TEXT, Optional): Priority of the case (e.g., High, Medium, Low)
* `origin` (TEXT, Optional): Origin of the case (e.g., Email, Phone, Web)
* `reason` (TEXT, Optional): Reason for the case
* `type` (TEXT, Optional): Type of the case
* `description` (TEXT, Optional): Description of the case
* `accountId` (TEXT, Optional): ID of the related account
* `contactId` (TEXT, Optional): ID of the related contact
**Output:** Returns the created case details
***
### Create contact
##### `salesforce.createcontact`
Creates a new contact
**Requires Confirmation:** Yes
**Parameters:**
* `firstName` (TEXT, Required): First name of the contact
* `lastName` (TEXT, Required): Last name of the contact
* `email` (TEXT, Optional): Email address of the contact
* `phone` (TEXT, Optional): Phone number of the contact
* `title` (TEXT, Optional): Job title of the contact
* `department` (TEXT, Optional): Department of the contact
* `accountId` (TEXT, Optional): ID of the related account
* `mailingStreet` (TEXT, Optional): Mailing street address
* `mailingCity` (TEXT, Optional): Mailing city
* `mailingState` (TEXT, Optional): Mailing state
* `mailingPostalCode` (TEXT, Optional): Mailing postal code
* `mailingCountry` (TEXT, Optional): Mailing country
**Output:** Returns the created contact details
***
### Create lead
##### `salesforce.createlead`
Creates a new lead
**Requires Confirmation:** Yes
**Parameters:**
* `firstName` (TEXT, Required): First name of the lead
* `lastName` (TEXT, Required): Last name of the lead
* `email` (TEXT, Optional): Email address of the lead
* `phone` (TEXT, Optional): Phone number of the lead
* `company` (TEXT, Optional): Company name
* `title` (TEXT, Optional): Job title of the lead
* `industry` (TEXT, Optional): Industry of the lead
* `status` (TEXT, Optional): Status of the lead (e.g., Open, Qualified, Unqualified)
* `rating` (TEXT, Optional): Rating of the lead (e.g., Hot, Warm, Cold)
* `source` (TEXT, Optional): Source of the lead (e.g., Web, Phone, Email)
* `street` (TEXT, Optional): Street address
* `city` (TEXT, Optional): City
* `state` (TEXT, Optional): State
* `postalCode` (TEXT, Optional): Postal code
* `country` (TEXT, Optional): Country
**Output:** Returns the created lead details
***
### Create opportunity
##### `salesforce.createopportunity`
Creates a new opportunity
**Requires Confirmation:** Yes
**Parameters:**
* `name` (TEXT, Required): Name of the opportunity
* `stageName` (TEXT, Required): Stage of the opportunity (e.g., Prospecting, Qualification, Proposal)
* `closeDate` (TEXT, Required): Close date of the opportunity (YYYY-MM-DD)
* `amount` (NUMBER, Optional): Amount of the opportunity
* `probability` (NUMBER, Optional): Probability percentage (0-100)
* `type` (TEXT, Optional): Type of the opportunity
* `leadSource` (TEXT, Optional): Lead source
* `description` (TEXT, Optional): Description of the opportunity
* `accountId` (TEXT, Optional): ID of the related account
* `contactId` (TEXT, Optional): ID of the related contact
**Output:** Returns the created opportunity details
***
### Create task
##### `salesforce.createtask`
Creates a new task
**Requires Confirmation:** Yes
**Parameters:**
* `subject` (TEXT, Required): Subject of the task
* `status` (TEXT, Optional): Status of the task (e.g., Not Started, In Progress, Completed)
* `priority` (TEXT, Optional): Priority of the task (e.g., High, Normal, Low)
* `activityDate` (TEXT, Optional): Due date of the task (YYYY-MM-DD)
* `description` (TEXT, Optional): Description of the task
* `whoId` (TEXT, Optional): ID of the related contact or lead
* `whatId` (TEXT, Optional): ID of the related account, opportunity, or case
**Output:** Returns the created task details
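The `whoId`/`whatId` split follows Salesforce convention: `whoId` points at a person record (a contact, prefix 003, or a lead, prefix 00Q), while `whatId` points at a non-person record (an account 001, opportunity 006, or case 500). An illustrative request (IDs and date are placeholders):

```json
{
  "subject": "Follow up on proposal",
  "activityDate": "2024-06-30",
  "whoId": "003XXXXXXXXXXXXXXX",
  "whatId": "006XXXXXXXXXXXXXXX"
}
```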
***
### Update account
##### `salesforce.updateaccount`
Updates an existing account
**Requires Confirmation:** Yes
**Parameters:**
* `accountId` (TEXT, Required): ID of the account to update
* `name` (TEXT, Optional): Updated name of the account
* `type` (TEXT, Optional): Updated type of account
* `industry` (TEXT, Optional): Updated industry of the account
* `phone` (TEXT, Optional): Updated phone number
* `website` (TEXT, Optional): Updated website URL
* `billingStreet` (TEXT, Optional): Updated billing street address
* `billingCity` (TEXT, Optional): Updated billing city
* `billingState` (TEXT, Optional): Updated billing state
* `billingPostalCode` (TEXT, Optional): Updated billing postal code
* `billingCountry` (TEXT, Optional): Updated billing country
* `shippingStreet` (TEXT, Optional): Updated shipping street address
* `shippingCity` (TEXT, Optional): Updated shipping city
* `shippingState` (TEXT, Optional): Updated shipping state
* `shippingPostalCode` (TEXT, Optional): Updated shipping postal code
* `shippingCountry` (TEXT, Optional): Updated shipping country
* `description` (TEXT, Optional): Updated description of the account
**Output:** Returns the updated account details
***
### Update campaign
##### `salesforce.updatecampaign`
Updates an existing campaign
**Requires Confirmation:** Yes
**Parameters:**
* `campaignId` (TEXT, Required): ID of the campaign to update
* `name` (TEXT, Optional): Updated name of the campaign
* `type` (TEXT, Optional): Updated type of campaign
* `status` (TEXT, Optional): Updated status of the campaign
* `startDate` (TEXT, Optional): Updated start date of the campaign
* `endDate` (TEXT, Optional): Updated end date of the campaign
* `budgetedCost` (NUMBER, Optional): Updated budgeted cost of the campaign
* `actualCost` (NUMBER, Optional): Updated actual cost of the campaign
* `expectedRevenue` (NUMBER, Optional): Updated expected revenue from the campaign
* `description` (TEXT, Optional): Updated description of the campaign
**Output:** Returns the updated campaign details
***
### Update case
##### `salesforce.updatecase`
Updates an existing case
**Requires Confirmation:** Yes
**Parameters:**
* `caseId` (TEXT, Required): ID of the case to update
* `subject` (TEXT, Optional): Updated subject of the case
* `status` (TEXT, Optional): Updated status of the case
* `priority` (TEXT, Optional): Updated priority of the case
* `origin` (TEXT, Optional): Updated origin of the case
* `reason` (TEXT, Optional): Updated reason for the case
* `type` (TEXT, Optional): Updated type of the case
* `description` (TEXT, Optional): Updated description of the case
* `accountId` (TEXT, Optional): Updated ID of the related account
* `contactId` (TEXT, Optional): Updated ID of the related contact
**Output:** Returns the updated case details
***
### Update contact
##### `salesforce.updatecontact`
Updates an existing contact
**Requires Confirmation:** Yes
**Parameters:**
* `contactId` (TEXT, Required): ID of the contact to update
* `firstName` (TEXT, Optional): Updated first name of the contact
* `lastName` (TEXT, Optional): Updated last name of the contact
* `email` (TEXT, Optional): Updated email address of the contact
* `phone` (TEXT, Optional): Updated phone number of the contact
* `title` (TEXT, Optional): Updated job title of the contact
* `department` (TEXT, Optional): Updated department of the contact
* `accountId` (TEXT, Optional): Updated ID of the related account
* `mailingStreet` (TEXT, Optional): Updated mailing street address
* `mailingCity` (TEXT, Optional): Updated mailing city
* `mailingState` (TEXT, Optional): Updated mailing state
* `mailingPostalCode` (TEXT, Optional): Updated mailing postal code
* `mailingCountry` (TEXT, Optional): Updated mailing country
**Output:** Returns the updated contact details
***
### Update lead
##### `salesforce.updatelead`
Updates an existing lead
**Requires Confirmation:** Yes
**Parameters:**
* `leadId` (TEXT, Required): ID of the lead to update
* `firstName` (TEXT, Optional): Updated first name of the lead
* `lastName` (TEXT, Optional): Updated last name of the lead
* `email` (TEXT, Optional): Updated email address of the lead
* `phone` (TEXT, Optional): Updated phone number of the lead
* `company` (TEXT, Optional): Updated company name
* `title` (TEXT, Optional): Updated job title of the lead
* `industry` (TEXT, Optional): Updated industry of the lead
* `status` (TEXT, Optional): Updated status of the lead
* `rating` (TEXT, Optional): Updated rating of the lead
* `source` (TEXT, Optional): Updated source of the lead
* `street` (TEXT, Optional): Updated street address
* `city` (TEXT, Optional): Updated city
* `state` (TEXT, Optional): Updated state
* `postalCode` (TEXT, Optional): Updated postal code
* `country` (TEXT, Optional): Updated country
**Output:** Returns the updated lead details
***
### Update opportunity
##### `salesforce.updateopportunity`
Updates an existing opportunity
**Requires Confirmation:** Yes
**Parameters:**
* `opportunityId` (TEXT, Required): ID of the opportunity to update
* `name` (TEXT, Optional): Updated name of the opportunity
* `stageName` (TEXT, Optional): Updated stage of the opportunity
* `closeDate` (TEXT, Optional): Updated close date of the opportunity
* `amount` (NUMBER, Optional): Updated amount of the opportunity
* `probability` (NUMBER, Optional): Updated probability percentage
* `type` (TEXT, Optional): Updated type of the opportunity
* `leadSource` (TEXT, Optional): Updated lead source
* `description` (TEXT, Optional): Updated description of the opportunity
* `accountId` (TEXT, Optional): Updated ID of the related account
* `contactId` (TEXT, Optional): Updated ID of the related contact
**Output:** Returns the updated opportunity details
***
### Update task
##### `salesforce.updatetask`
Updates an existing task
**Requires Confirmation:** Yes
**Parameters:**
* `taskId` (TEXT, Required): ID of the task to update
* `subject` (TEXT, Optional): Updated subject of the task
* `status` (TEXT, Optional): Updated status of the task
* `priority` (TEXT, Optional): Updated priority of the task
* `activityDate` (TEXT, Optional): Updated due date of the task
* `description` (TEXT, Optional): Updated description of the task
* `whoId` (TEXT, Optional): Updated ID of the related contact or lead
* `whatId` (TEXT, Optional): Updated ID of the related account, opportunity, or case
**Output:** Returns the updated task details
***
### Delete account
##### `salesforce.deleteaccount`
Deletes an account
**Requires Confirmation:** Yes
**Parameters:**
* `accountId` (TEXT, Required): ID of the account to delete
**Output:** Confirmation of deletion
***
### Delete campaign
##### `salesforce.deletecampaign`
Deletes a campaign
**Requires Confirmation:** Yes
**Parameters:**
* `campaignId` (TEXT, Required): ID of the campaign to delete
**Output:** Confirmation of deletion
***
### Delete case
##### `salesforce.deletecase`
Deletes a case
**Requires Confirmation:** Yes
**Parameters:**
* `caseId` (TEXT, Required): ID of the case to delete
**Output:** Confirmation of deletion
***
### Delete contact
##### `salesforce.deletecontact`
Deletes a contact
**Requires Confirmation:** Yes
**Parameters:**
* `contactId` (TEXT, Required): ID of the contact to delete
**Output:** Confirmation of deletion
***
### Delete lead
##### `salesforce.deletelead`
Deletes a lead
**Requires Confirmation:** Yes
**Parameters:**
* `leadId` (TEXT, Required): ID of the lead to delete
**Output:** Confirmation of deletion
***
### Delete opportunity
##### `salesforce.deleteopportunity`
Deletes an opportunity
**Requires Confirmation:** Yes
**Parameters:**
* `opportunityId` (TEXT, Required): ID of the opportunity to delete
**Output:** Confirmation of deletion
***
### Delete task
##### `salesforce.deletetask`
Deletes a task
**Requires Confirmation:** Yes
**Parameters:**
* `taskId` (TEXT, Required): ID of the task to delete
**Output:** Confirmation of deletion
***
## Common Use Cases
Manage and organize your Salesforce data
Automate workflows with Salesforce
Generate insights and reports
Connect Salesforce with other tools
## Best Practices
**Getting Started:**
1. Enable the Salesforce integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Salesforce integration, contact [support@langdock.com](mailto:support@langdock.com)
# SharePoint
Source: https://docs.langdock.com/administration/integrations/sharepoint
Microsoft SharePoint is a service that helps organizations share content
## Overview
Microsoft SharePoint is a service that helps organizations share content. Through Langdock's integration, you can access and manage SharePoint directly from your conversations.
**Authentication:** OAuth\
**Category:** Microsoft 365\
**Availability:** All workspace plans
## Available Actions
### Search files
##### `sharepoint.searchfiles`
Searches files by name and returns detailed information about each matching file
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Required): The filename or query to search for in SharePoint. The search is case-insensitive and returns all items whose names partially contain the input. For example, you can search for specific documents like 'Budget 2023', 'Project proposal', or 'Meeting notes'
**Output:** Returns a list of files with details including URL, document ID, title, MIME type, author, creation date, and modification information
***
### Search SharePoint
##### `sharepoint.searchSharePoint`
Searches documents in SharePoint
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Optional): Search query for documents
**Output:** Returns search results for SharePoint documents
***
### List files in folder
##### `sharepoint.listfilesinfolder`
Lists all files in a SharePoint folder recursively, including subfolders. Images, videos, and spreadsheets are filtered out, similar to the Google Drive integration.
**Requires Confirmation:** No
**Parameters:**
* `folderId` (TEXT, Required): Folder configuration as a JSON string with `siteId`, `driveId`, and/or `folderId`. Examples: `{"siteId": "site-id", "folderId": "folder-id"}` for a specific folder, `{"siteId": "site-id"}` for the site root, or `{"siteId": "site-id", "driveId": "drive-id"}` for a drive root.
* `parent` (TEXT, Required): Parent folder information
**Output:** Returns a list of files in the specified folder
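The `folderId` JSON string can be built programmatically; a minimal sketch in Python, using placeholder IDs:

```python
import json

# Hypothetical IDs -- substitute your own site, drive, and folder IDs.
site_id = "contoso.sharepoint.com,abc123"
folder_id = "01ABCDEF"

# A specific folder within a site.
specific_folder = json.dumps({"siteId": site_id, "folderId": folder_id})

# Root of the site's default document library.
site_root = json.dumps({"siteId": site_id})

print(specific_folder)
```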
***
### Download File
##### `sharepoint.downloadFile`
Downloads a file from SharePoint
**Requires Confirmation:** No
**Parameters:**
* `parent` (TEXT, Required): Parent folder information
* `itemId` (TEXT, Required): Item ID of the file to download
**Output:** Returns the downloaded file content
***
### Download SharePoint File
##### `sharepoint.downloadSharePointFile`
Downloads a file from SharePoint and attaches it to the chat
**Requires Confirmation:** No
**Parameters:**
* `parent` (OBJECT, Required): Parent object of the SharePoint file
* `itemId` (TEXT, Required): Item ID of the file to download
**Output:** Downloads and returns the file
***
## Common Use Cases
Manage and organize your SharePoint data
Automate workflows with SharePoint
Generate insights and reports
Connect SharePoint with other tools
## Best Practices
**Getting Started:**
1. **Prerequisite:** A Microsoft Admin must [approve the Langdock application](/administration/microsoft-admin-approval) in your Microsoft workspace once.
2. Enable the SharePoint integration in your workspace settings
3. Authenticate using OAuth
4. Test the connection with a simple read operation
5. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the SharePoint integration, contact [support@langdock.com](mailto:support@langdock.com)
# Slack
Source: https://docs.langdock.com/administration/integrations/slack
Team messaging platform connecting conversations, files and tools
## Overview
Slack is a team messaging platform connecting conversations, files, and tools. Through Langdock's integration, you can access and manage Slack directly from your conversations.
**Authentication:** OAuth\
**Category:** Productivity & Collaboration\
**Availability:** All workspace plans
## Available Actions
### Get current user
##### `slack.getcurrentuser`
Gets information about the authenticated user
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns the current user's information
***
### Search messages
##### `slack.searchmessages`
Searches for messages matching a query
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Required): The search query used to search for messages. Plain text search query with optional modifiers. Supports location modifiers (in:channel\_name, in:@username), person modifiers (from:@username, to:@username), date/time modifiers (after:YYYY-MM-DD, before:YYYY-MM-DD), content type modifiers (has:link, has:image, has:file), status modifiers (is:saved, is:thread), and boolean operators (AND, OR, NOT)
**Output:** Returns a list of messages matching the search criteria
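The modifiers above compose into a single space-separated query string; a small sketch (the channel and user names are placeholders):

```python
# Compose a search query from the modifiers described above.
# 'project-x' and '@alice' are placeholder names.
parts = [
    "deployment",        # plain-text keyword
    "in:project-x",      # location modifier: restrict to a channel
    "from:@alice",       # person modifier: restrict to a sender
    "after:2024-01-01",  # date modifier
    "has:link",          # content-type modifier
]
query = " ".join(parts)
print(query)
```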
***
### Send message
##### `slack.sendmessage`
Sends a message to a channel
**Requires Confirmation:** Yes
**Parameters:**
* `channelId` (TEXT, Required): An encoded ID or channel name that represents a channel, private group, or IM channel to send the message to
* `text` (MULTI\_LINE\_TEXT, Required): The text of the message to send (max 3000 characters)
**Output:** Returns the sent message details
***
### Get conversation history
##### `slack.getconversationhistory`
Fetches a conversation's history of messages and events. All timestamps are handled in UTC timezone.
**Requires Confirmation:** No
**Parameters:**
* `channelId` (TEXT, Required): Conversation ID to fetch history for
* `latest` (TEXT, Optional): End of time range of messages to include in results. Messages sent after this timestamp will not be included. Default is the current time. Format: RFC3339 in UTC (e.g., '2024-03-20T15:30:00Z'). The 'Z' suffix indicates UTC timezone.
* `oldest` (TEXT, Optional): Start of time range of messages to include in results. Messages sent before this timestamp will not be included. Format: RFC3339 in UTC (e.g., '2024-03-19T15:30:00Z'). The 'Z' suffix indicates UTC timezone.
* `showThreads` (BOOLEAN, Optional): Whether to include thread replies in the response. If false, thread replies will be empty arrays.
**Output:** Returns the conversation history with messages and events
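The `oldest` and `latest` values can be produced with Python's standard library; a minimal sketch reproducing the example timestamps above:

```python
from datetime import datetime, timedelta, timezone

# Build RFC3339 UTC timestamps for the 'oldest' and 'latest' parameters.
# A fixed reference time is used here for illustration.
now = datetime(2024, 3, 20, 15, 30, tzinfo=timezone.utc)
latest = now.strftime("%Y-%m-%dT%H:%M:%SZ")
oldest = (now - timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%SZ")

print(oldest, latest)  # 2024-03-19T15:30:00Z 2024-03-20T15:30:00Z
```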
***
### Get channels
##### `slack.getchannels`
Gets all Slack channels (public and private team channels only)
**Requires Confirmation:** No
**Parameters:**
* `channelTypes` (SELECT, Optional): Filter channels by type. Select one or more channel types to include in results: Public Channels (open channels visible to all workspace members) or Private Channels (invite-only channels for specific teams or topics). Leave empty to include both Public and Private channels by default.
**Output:** Returns a list of channels
***
### Get people
##### `slack.getpeople`
Gets all Slack people
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns a list of people in the workspace
***
### Get channels by name
##### `slack.getchannelsbyname`
Searches for channels in your Slack workspace by name (public and private team channels only)
**Requires Confirmation:** No
**Parameters:**
* `channelName` (TEXT, Required): Search term used to find channels by their names. For example, you can search for 'general', 'marketing', 'support' or 'eng' to find channels with those terms in their names
* `channelTypes` (SELECT, Optional): Filter channels by type. Select one or more channel types to include in search results: Public Channels or Private Channels. Leave empty to search both Public and Private channels by default.
**Output:** Returns matching channels
***
### List user conversations
##### `slack.listuserconversations`
Lists all conversations (channels, DMs, group DMs) that a specific user is a member of
**Requires Confirmation:** No
**Parameters:**
* `userId` (TEXT, Required): The user ID to get conversations for. Use the User ID (starts with 'U') not the username. You can get user IDs from the 'Get people' or 'Search user by email' actions.
* `conversationTypes` (SELECT, Optional): Filter conversations by type. Select one or more conversation types to include in results: Public Channels, Private Channels, Direct Messages, or Group Direct Messages. Leave empty to include all conversation types the user is a member of.
**Output:** Returns a list of conversations for the user
***
### Search user by email
##### `slack.searchuserbyemail`
Looks up a Slack user by their email address
**Requires Confirmation:** No
**Parameters:**
* `email` (TEXT, Required): The email address of the user to look up
**Output:** Returns the user details if found
***
### Reply to message
##### `slack.replytomessage`
Replies to a message in a thread
**Requires Confirmation:** Yes
**Parameters:**
* `channelId` (TEXT, Required): An encoded ID or channel name that represents a channel, private group, or IM channel to send the message to
* `text` (TEXT, Required): The text of the message to send
* `threadTs` (TEXT, Required): The Unix timestamp of the original message to ensure the reply is in the correct thread. Use the 'ts\_unix' or 'thread\_ts\_unix' field from 'Get conversation history' or 'Search messages' actions. Format: Unix timestamp with microseconds (e.g., '1710951000.123456')
**Output:** Returns the reply message details
***
#### Triggers
### New message by search
##### `slack.newmessagebysearch`
Triggers when a new message is found by searching for a specific keyword or other criteria
**Requires Confirmation:** No
**Parameters:**
* `keywords` (TEXT, Optional): Text to search for in messages (e.g., 'project update')
* `in` (TEXT, Optional): Syntax for the 'in' parameter: in:channel\_name, in:group\_name, or `in:<@UserID>`. Example values: in:general, `in:<@U05K6TALQ87>`. The 'in:' prefix is added automatically if not included. You can get the user ID from the user profile: click the three-dots button, then 'Copy member ID'.
* `from` (TEXT, Optional): Syntax for the 'from' parameter: `from:<@UserID>` or from:botname. Example values: `from:<@U05K6TALQ87>` or from:slackbot. The 'from:' prefix is added automatically if not included.
**Output:** Returns the operation result
***
### New message in channel
##### `slack.newmessageinchannel`
Triggers when a new message is posted in a channel (public, private, DM, etc.)
**Requires Confirmation:** No
**Parameters:**
* `channelId` (TEXT, Required): The ID of the channel to monitor for new messages. A channel ID exists for public channels, private channels, direct messages, and private group messages. You can find it by clicking the conversation name at the top of the conversation and opening the 'About' tab.
**Output:** Returns the operation result
***
### New message in conversations
##### `slack.newmessageinconversations`
Triggers when a new message is posted in a specific conversation (DM, group DM, or channel)
**Requires Confirmation:** No
**Parameters:**
* `conversationId` (TEXT, Required): The ID of the conversation to monitor for new messages. This can be: A direct message (DM) conversation ID (starts with 'D'), A group direct message conversation ID (starts with 'G'), or A channel ID (starts with 'C'). You can find conversation IDs using the 'List user conversations' action or by checking the conversation details in Slack.
* `latest` (TEXT, Optional): End of time range of messages to include in results. Messages sent after this timestamp will not be included. Default is the current time. Format: Unix timestamp (e.g., '1609459200' for 2021-01-01).
* `oldest` (TEXT, Optional): Start of time range of messages to include in results. Messages sent before this timestamp will not be included. Format: Unix timestamp (e.g., '1609372800' for 2020-12-31).
**Output:** Returns the operation result
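Unlike 'Get conversation history' (which uses RFC3339), this trigger takes Unix timestamps; a minimal sketch producing the example values above:

```python
from datetime import datetime, timezone

# Convert calendar dates to the Unix timestamp strings the trigger expects.
oldest = str(int(datetime(2020, 12, 31, tzinfo=timezone.utc).timestamp()))
latest = str(int(datetime(2021, 1, 1, tzinfo=timezone.utc).timestamp()))

print(oldest, latest)  # 1609372800 1609459200
```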
***
## Common Use Cases
Manage and organize your Slack data
Automate workflows with Slack
Generate insights and reports
Connect Slack with other tools
## Best Practices
**Getting Started:**
1. Enable the Slack integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Slack integration, contact [support@langdock.com](mailto:support@langdock.com)
# Snowflake
Source: https://docs.langdock.com/administration/integrations/snowflake
Snowflake allows you to store and analyze data using cloud-based hardware and software
## Overview
Snowflake allows you to store and analyze data using cloud-based hardware and software. Through Langdock's integration, you can access and manage Snowflake directly from your conversations.
**Authentication:** API Key\
**Category:** Productivity & Collaboration\
**Availability:** All workspace plans
## Available Actions
### Execute SQL
##### `snowflake.executeSQL`
Executes a SQL query and returns the resulting data
**Requires Confirmation:** Yes
**Parameters:**
* `sqlQuery` (MULTI\_LINE\_TEXT, Required): The SQL query to execute against Snowflake
**Output:** Returns the query results with data
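As an illustration of a `sqlQuery` input (the table and column names are hypothetical, not part of any default schema):

```python
# Hypothetical schema -- adjust table and column names to your warehouse.
sql_query = """
SELECT customer_id, SUM(amount) AS total_spend
FROM analytics.orders
WHERE order_date >= '2024-01-01'
GROUP BY customer_id
ORDER BY total_spend DESC
LIMIT 10
""".strip()

print(sql_query)
```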
***
### Search schema
##### `snowflake.searchschema`
Searches for schemas or tables containing specific keywords
**Requires Confirmation:** No
**Parameters:**
* `searchQuery` (TEXT, Required): The keyword to search for in schema or table names. For example, to find tables containing Salesforce data, search for 'salesforce'
* `searchType` (SELECT, Optional): What to search for: 'table' (default) searches for table names, 'schema' searches for schema names
**Output:** Returns matching schemas or tables
***
### Cortex search
##### `snowflake.cortexsearch`
Performs semantic search using Snowflake Cortex Search service
**Requires Confirmation:** No
**Parameters:**
* `searchServiceName` (TEXT, Required): The name of the Cortex Search service to use (must be created beforehand in Snowflake)
* `query` (TEXT, Required): The search query for semantic search. Cortex Search uses both vector and keyword methods to find relevant results
* `columns` (TEXT, Optional): Comma-separated list of columns to return in the search results. If not specified, all columns are returned
* `filter` (TEXT, Optional): SQL WHERE clause to filter search results. For example: category = 'support' AND status = 'active'
* `limit` (NUMBER, Optional): Maximum number of search results to return. Default is 10
**Output:** Returns semantic search results
***
## Common Use Cases
Manage and organize your Snowflake data
Automate workflows with Snowflake
Generate insights and reports
Connect Snowflake with other tools
## Best Practices
**Getting Started:**
1. Enable the Snowflake integration in your workspace settings
2. Authenticate using API Key
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your API Key credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Snowflake integration, contact [support@langdock.com](mailto:support@langdock.com)
# Statista
Source: https://docs.langdock.com/administration/integrations/statista
Statistics for everyone
## Overview
Statista provides statistics for everyone. Through Langdock's integration, you can access and manage Statista directly from your conversations.
**Authentication:** API Key\
**Category:** Productivity & Collaboration\
**Availability:** All workspace plans
## Available Actions
### Get statistics data by id
##### `statista.getstatisticsdatabyid`
Retrieves detailed statistical information for a specific Statista chart or dataset using its unique identifier.
**Requires Confirmation:** No
**Parameters:**
* `id` (NUMBER, Required): The unique identifier of the Statista chart or dataset. Pass an ID from the 'Search statistics' results to retrieve full numerical values, methodological information, source details, and contextual metadata. Always cite the source of the data in your response.
**Output:** Returns detailed statistical data for the specified ID
***
### Search statistics
##### `statista.searchstatistics`
Searches the comprehensive Statista data catalogue to discover relevant statistical content. This tool enables exploration of Statista's extensive library of charts, reports, and forecasts.
**Requires Confirmation:** No
**Parameters:**
* `q` (TEXT, Required): The question the user is asking. Can be keyword-like or a full natural language question
**Output:** Returns search results from Statista's data catalogue
***
## Common Use Cases
Manage and organize your Statista data
Automate workflows with Statista
Generate insights and reports
Connect Statista with other tools
## Best Practices
**Getting Started:**
1. Enable the Statista integration in your workspace settings
2. Authenticate using API Key
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your API Key credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Statista integration, contact [support@langdock.com](mailto:support@langdock.com)
# Stripe
Source: https://docs.langdock.com/administration/integrations/stripe
Complete payment processing platform with support for payments, subscriptions, invoicing, and financial services
## Overview
Stripe is a complete payment processing platform with support for payments, subscriptions, invoicing, and financial services. Through Langdock's integration, you can access and manage Stripe directly from your conversations.
**Authentication:** API Key\
**Category:** Productivity & Collaboration\
**Availability:** All workspace plans
## Available Actions
### Create customer
##### `stripe.createcustomer`
Creates a new customer in Stripe. Use the Email and Name fields for basic info, and Custom Parameters for all other Stripe customer fields.
**Requires Confirmation:** Yes
**Parameters:**
* `email` (TEXT, Optional): Customer's email address. This will be their primary contact and used for invoices.
* `name` (TEXT, Optional): Customer's full name or business name. This appears on invoices and in the Stripe dashboard.
* `customParameters` (TEXT, Optional): Additional Stripe customer parameters as a JSON object. Common fields: description, phone, address (with line1, city, postal\_code, country), shipping, metadata (for custom data like orgID, timezone), preferred\_locales, tax\_exempt, tax\_id\_data (array of tax IDs). For German VAT: `"tax_id_data": [{"type": "eu_vat", "value": "DE123456789"}]`. Full example: `{"description": "Company ABC", "phone": "+1234567890", "address": {"line1": "123 Main St", "city": "Berlin", "postal_code": "10115", "country": "DE"}, "metadata": {"orgID": "workspace_123", "timezone": "Europe/Berlin"}, "tax_id_data": [{"type": "eu_vat", "value": "DE123456789"}]}`
**Output:** Returns the created customer details
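Because `customParameters` is a JSON-encoded string, it can be serialized from a dictionary; a sketch using the full example above (all values are placeholders):

```python
import json

# Build the customParameters JSON string for 'Create customer'.
# Every value here is an illustrative placeholder.
custom_parameters = json.dumps({
    "description": "Company ABC",
    "phone": "+1234567890",
    "address": {"line1": "123 Main St", "city": "Berlin",
                "postal_code": "10115", "country": "DE"},
    "metadata": {"orgID": "workspace_123", "timezone": "Europe/Berlin"},
    "tax_id_data": [{"type": "eu_vat", "value": "DE123456789"}],
})

print(custom_parameters)
```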
***
### Update customer
##### `stripe.updatecustomer`
Updates an existing customer's information
**Requires Confirmation:** Yes
**Parameters:**
* `customerId` (TEXT, Required): The ID of the customer to update (e.g., cus\_...)
* `email` (TEXT, Optional): Customer's email address
* `name` (TEXT, Optional): Customer's full name or business name
* `description` (TEXT, Optional): An arbitrary string that you can attach to a customer object
* `phone` (TEXT, Optional): Customer's phone number
* `metadata` (TEXT, Optional): Set of key-value pairs that you can attach to an object
**Output:** Returns the updated customer details
***
### Get customer
##### `stripe.getcustomer`
Retrieves a customer by ID
**Requires Confirmation:** No
**Parameters:**
* `customerId` (TEXT, Required): The ID of the customer to retrieve
**Output:** Returns the customer details
***
### List customers
##### `stripe.listcustomers`
Lists all customers with optional filtering
**Requires Confirmation:** No
**Parameters:**
* `email` (TEXT, Optional): Filter customers by email address
* `limit` (NUMBER, Optional): Maximum number of customers to return (1-100)
**Output:** Returns a list of customers
***
### Create payment intent
##### `stripe.createpaymentintent`
Creates a new payment intent for collecting payment
**Requires Confirmation:** Yes
**Parameters:**
* `amount` (NUMBER, Required): Amount to be collected in the smallest currency unit (e.g., cents for USD)
* `currency` (TEXT, Required): Three-letter ISO currency code (e.g., usd, eur, gbp)
* `customerId` (TEXT, Optional): ID of the customer this payment intent is for
* `description` (TEXT, Optional): An arbitrary string attached to the object
* `metadata` (TEXT, Optional): Set of key-value pairs that you can attach to an object
* `paymentMethodTypes` (TEXT, Optional): Array of payment method types to accept. Example: `["card", "customer_balance"]` - use 'customer\_balance' for bank transfers
* `paymentMethodOptions` (TEXT, Optional): Additional options for payment methods. For bank transfers use: `{"customer_balance": {"funding_type": "bank_transfer", "bank_transfer": {"type": "eu_bank_transfer"}}}` (or `"us_bank_transfer"` for US)
**Output:** Returns the created payment intent details
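A sketch of the JSON-valued inputs for collecting an EU bank transfer, mirroring the parameter descriptions above:

```python
import json

# JSON-encoded inputs for an EU bank-transfer payment intent.
payment_method_types = json.dumps(["card", "customer_balance"])
payment_method_options = json.dumps({
    "customer_balance": {
        "funding_type": "bank_transfer",
        "bank_transfer": {"type": "eu_bank_transfer"},
    }
})

print(payment_method_options)
```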
***
### Confirm payment intent
##### `stripe.confirmpaymentintent`
Confirms a payment intent to finalize the payment
**Requires Confirmation:** Yes
**Parameters:**
* `paymentIntentId` (TEXT, Required): The ID of the payment intent to confirm
* `paymentMethodId` (TEXT, Optional): ID of the payment method to use
**Output:** Returns the confirmed payment intent details
***
### Create subscription
##### `stripe.createsubscription`
Creates a new subscription for a customer
**Requires Confirmation:** Yes
**Parameters:**
* `customerId` (TEXT, Required): The ID of the customer to subscribe
* `items` (TEXT, Required): List of subscription items, each with a price ID. Example: `[{"price": "price_1234"}]`
* `trialPeriodDays` (NUMBER, Optional): Number of trial period days for the subscription
* `metadata` (TEXT, Optional): Set of key-value pairs that you can attach to an object
* `defaultPaymentMethod` (TEXT, Optional): ID of the default payment method for the subscription
* `collectionMethod` (SELECT, Optional): How to collect payment for the subscription. Use 'send\_invoice' for manual bank transfers
* `daysUntilDue` (NUMBER, Optional): Number of days until the invoice is due (only used when collection\_method is 'send\_invoice')
* `paymentSettings` (TEXT, Optional): Payment settings for the subscription. For bank transfers: `{"payment_method_types": ["customer_balance"], "payment_method_options": {"customer_balance": {"funding_type": "bank_transfer"}}}`
* `defaultTaxRates` (TEXT, Optional): Array of tax rate IDs to apply to the subscription. Example: `["txr_1234"]`
* `coupon` (TEXT, Optional): The coupon ID to apply to this subscription
* `promotionCode` (TEXT, Optional): The promotion code ID to apply to this subscription
**Output:** Returns the created subscription details
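A sketch of the JSON-valued inputs for a subscription collected via bank transfer (the price and tax-rate IDs are placeholders):

```python
import json

# JSON-encoded inputs for 'Create subscription' with invoice-based
# bank-transfer collection. IDs below are illustrative placeholders.
items = json.dumps([{"price": "price_1234"}])
payment_settings = json.dumps({
    "payment_method_types": ["customer_balance"],
    "payment_method_options": {
        "customer_balance": {"funding_type": "bank_transfer"}
    },
})
default_tax_rates = json.dumps(["txr_1234"])

print(items)
```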
***
### Cancel subscription
##### `stripe.cancelsubscription`
Cancels a customer's subscription
**Requires Confirmation:** Yes
**Parameters:**
* `subscriptionId` (TEXT, Required): The ID of the subscription to cancel
* `cancelAtPeriodEnd` (BOOLEAN, Optional): If true, subscription will be canceled at the end of the current period
**Output:** Returns the canceled subscription details
***
### Create product
##### `stripe.createproduct`
Creates a new product that can be used with prices
**Requires Confirmation:** Yes
**Parameters:**
* `name` (TEXT, Required): The product's name, meant to be displayable to the customer
* `description` (TEXT, Optional): The product's description, meant to be displayable to the customer
* `active` (BOOLEAN, Optional): Whether the product is currently available for purchase
* `metadata` (TEXT, Optional): Set of key-value pairs that you can attach to an object
**Output:** Returns the created product details
***
### Create price
##### `stripe.createprice`
Creates a new price for a product
**Requires Confirmation:** Yes
**Parameters:**
* `productId` (TEXT, Required): The ID of the product this price is for
* `unitAmount` (NUMBER, Required): The unit amount in the smallest currency unit (e.g., cents)
* `currency` (TEXT, Required): Three-letter ISO currency code
* `recurring` (TEXT, Optional): The recurring components of a price. Example: `{"interval": "month", "interval_count": 1}`
* `nickname` (TEXT, Optional): A brief description of the price, hidden from customers
**Output:** Returns the created price details
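As an illustration, a monthly recurring price could be described like this (the product ID and amount are placeholders):

```python
import json

# Inputs for 'Create price': a monthly price of EUR 49.00 (4900 cents).
price_input = {
    "productId": "prod_1234",   # placeholder product ID
    "unitAmount": 4900,         # smallest currency unit (cents)
    "currency": "eur",
    "recurring": json.dumps({"interval": "month", "interval_count": 1}),
}

print(price_input["recurring"])
```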
***
### Create invoice
##### `stripe.createinvoice`
Creates a new invoice for a customer
**Requires Confirmation:** Yes
**Parameters:**
* `customerId` (TEXT, Required): The ID of the customer to invoice
* `description` (TEXT, Optional): An arbitrary string attached to the object
* `autoAdvance` (BOOLEAN, Optional): Controls whether Stripe will perform automatic collection of the invoice
* `collectionMethod` (SELECT, Optional): Either charge\_automatically or send\_invoice
* `daysUntilDue` (NUMBER, Optional): Number of days until the invoice is due (required when collection\_method is 'send\_invoice')
**Output:** Returns the created invoice details
***
### Add invoice item
##### `stripe.addinvoiceitem`
Adds an item to a draft invoice
**Requires Confirmation:** Yes
**Parameters:**
* `customerId` (TEXT, Required): The ID of the customer (required)
* `invoiceId` (TEXT, Optional): The ID of the invoice to add the item to (optional - if not provided, creates a pending invoice item)
* `priceId` (TEXT, Optional): The ID of the price object
* `quantity` (NUMBER, Optional): The quantity of units for the item
* `amount` (NUMBER, Optional): The amount for a one-time charge (in cents)
* `description` (TEXT, Optional): Description for the invoice item
**Output:** Returns the added invoice item details
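The parameters above allow two shapes: reference an existing price object, or pass a one-time amount in cents. The sketch below shows both; all IDs are hypothetical placeholders.

```python
# Two illustrative parameter sets for stripe.addinvoiceitem.

# Reference an existing price object and a quantity
by_price = {
    "customerId": "cus_123",
    "invoiceId": "in_123",
    "priceId": "price_123",
    "quantity": 3,
}

# One-time charge: amount is in cents (25.00 -> 2500)
one_time = {
    "customerId": "cus_123",
    "amount": 2500,
    "description": "Setup fee",
}
```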
***
### Send invoice
##### `stripe.sendinvoice`
Sends an invoice to the customer
**Requires Confirmation:** Yes
**Parameters:**
* `invoiceId` (TEXT, Required): The ID of the invoice to send
**Output:** Returns the sent invoice details
***
### Create payment method
##### `stripe.createpaymentmethod`
Creates a payment method object representing a customer's payment instrument
**Requires Confirmation:** Yes
**Parameters:**
* `type` (SELECT, Required): The type of payment method (us\_bank\_account, sepa\_debit)
* `sepaDebit` (TEXT, Optional): SEPA bank account details if type is 'sepa\_debit'. Example: `{'iban': 'DE89370400440532013000'}`
* `usBankAccount` (TEXT, Optional): US bank account details if type is 'us\_bank\_account'. Example: `{'account_number': '000123456789', 'routing_number': '110000000', 'account_holder_type': 'individual'}`
* `billingDetails` (TEXT, Optional): Billing information (required for bank accounts). Example: `{'name': 'John Doe', 'email': 'john@example.com', 'phone': '+15555555555', 'address': {'line1': '123 Main St', 'city': 'San Francisco', 'state': 'CA', 'postal_code': '94111', 'country': 'US'}}`
**Output:** Returns the created payment method details
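Putting the parameters together for a SEPA debit instrument looks roughly like the sketch below. Field names follow the parameter list above; the IBAN and billing details are the documentation's own example values.

```python
# Illustrative payload for stripe.createpaymentmethod (SEPA debit).

sepa_payment_method = {
    "type": "sepa_debit",
    "sepaDebit": {"iban": "DE89370400440532013000"},
    # billingDetails is required for bank account payment methods
    "billingDetails": {
        "name": "John Doe",
        "email": "john@example.com",
    },
}
```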
***
### Attach payment method
##### `stripe.attachpaymentmethod`
Attaches a payment method to a customer
**Requires Confirmation:** Yes
**Parameters:**
* `paymentMethodId` (TEXT, Required): The ID of the payment method to attach
* `customerId` (TEXT, Required): The ID of the customer to attach the payment method to
**Output:** Returns the attached payment method details
***
### Create charge
##### `stripe.createcharge`
Creates a new charge on a payment source
**Requires Confirmation:** Yes
**Parameters:**
* `amount` (NUMBER, Required): Amount to charge in the smallest currency unit (e.g., cents)
* `currency` (TEXT, Required): Three-letter ISO currency code
* `customerId` (TEXT, Optional): The ID of the customer to charge
* `source` (TEXT, Optional): Payment source to charge (payment method ID or token)
* `description` (TEXT, Optional): An arbitrary string attached to the charge
**Output:** Returns the created charge details
***
### Create refund
##### `stripe.createrefund`
Refunds a charge that has been previously created
**Requires Confirmation:** Yes
**Parameters:**
* `chargeId` (TEXT, Optional): The ID of the charge to refund
* `paymentIntentId` (TEXT, Optional): The ID of the payment intent to refund
* `amount` (NUMBER, Optional): Amount to refund in cents. If not provided, the entire charge is refunded
* `reason` (SELECT, Optional): Reason for the refund (duplicate, fraudulent, requested\_by\_customer)
**Output:** Returns the created refund details
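Since `amount` is optional and in cents, a partial refund takes a small calculation. A minimal sketch, assuming a hypothetical charge ID and a 30% refund of a 49.99 charge:

```python
# Refunding 30% of a 4999-cent charge via stripe.createrefund.
# Omit `amount` entirely to refund the full charge.

charge_total_cents = 4999
partial_refund = {
    "chargeId": "ch_example",  # hypothetical charge ID
    "amount": round(charge_total_cents * 0.30),
    "reason": "requested_by_customer",
}
```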
***
### List charges
##### `stripe.listcharges`
Lists all charges with optional filtering
**Requires Confirmation:** No
**Parameters:**
* `customerId` (TEXT, Optional): Only return charges for this customer
* `limit` (NUMBER, Optional): Maximum number of charges to return (1-100)
**Output:** Returns a list of charges
***
### List payment intents
##### `stripe.listpaymentintents`
Lists all payment intents with optional filtering
**Requires Confirmation:** No
**Parameters:**
* `customerId` (TEXT, Optional): Only return payment intents for this customer
* `limit` (NUMBER, Optional): Maximum number of payment intents to return (1-100)
**Output:** Returns a list of payment intents
***
### List subscriptions
##### `stripe.listsubscriptions`
Lists all subscriptions with optional filtering
**Requires Confirmation:** No
**Parameters:**
* `customerId` (TEXT, Optional): Only return subscriptions for this customer
* `status` (SELECT, Optional): Only return subscriptions with this status (active, past\_due, unpaid, canceled, incomplete, incomplete\_expired, trialing)
* `limit` (NUMBER, Optional): Maximum number of subscriptions to return (1-100)
**Output:** Returns a list of subscriptions
***
### List invoices
##### `stripe.listinvoices`
Lists all invoices with optional filtering
**Requires Confirmation:** No
**Parameters:**
* `customerId` (TEXT, Optional): Only return invoices for this customer
* `status` (SELECT, Optional): Only return invoices with this status (draft, open, paid, uncollectible, void)
* `limit` (NUMBER, Optional): Maximum number of invoices to return (1-100)
**Output:** Returns a list of invoices
***
### Create checkout session
##### `stripe.createcheckoutsession`
Creates a Stripe Checkout session for payment collection
**Requires Confirmation:** Yes
**Parameters:**
* `successUrl` (TEXT, Required): The URL to redirect to after successful payment
* `cancelUrl` (TEXT, Required): The URL to redirect to if the customer cancels payment
* `mode` (SELECT, Required): The mode of the Checkout Session (payment, subscription, setup)
* `lineItems` (TEXT, Optional): List of items the customer is purchasing. Example: `[{'price': 'price_1234', 'quantity': 2}]`
* `customerId` (TEXT, Optional): ID of an existing customer, if one exists
**Output:** Returns the created checkout session details
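A complete parameter set for a one-time payment session, using the documented `lineItems` example shape. The URLs and price ID are placeholders.

```python
# Illustrative parameters for stripe.createcheckoutsession (payment mode).

checkout_params = {
    "successUrl": "https://example.com/success",
    "cancelUrl": "https://example.com/cancel",
    "mode": "payment",
    # lineItems follows the documented example shape
    "lineItems": [{"price": "price_1234", "quantity": 2}],
}
```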
***
### Retrieve balance
##### `stripe.retrievebalance`
Retrieves the current account balance
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns the current account balance
***
### Get customer funding instructions
##### `stripe.getcustomerfundinginstructions`
Retrieves bank transfer funding instructions for a customer's cash balance in a specific currency
**Requires Confirmation:** No
**Parameters:**
* `customerId` (TEXT, Required): The ID of the customer to retrieve funding instructions for
* `currency` (TEXT, Optional): The currency to retrieve funding instructions for (e.g., eur, usd)
**Output:** Returns the funding instructions
***
### List transactions
##### `stripe.listtransactions`
Lists all balance transactions
**Requires Confirmation:** No
**Parameters:**
* `limit` (NUMBER, Optional): Maximum number of transactions to return (1-100)
* `type` (TEXT, Optional): Only return transactions of this type
**Output:** Returns a list of balance transactions
***
### Create tax rate
##### `stripe.createtaxrate`
Creates a new tax rate
**Requires Confirmation:** Yes
**Parameters:**
* `displayName` (TEXT, Required): The display name of the tax rate (e.g., 'German VAT')
* `percentage` (NUMBER, Required): The tax rate percentage (e.g., 19 for 19%)
* `inclusive` (BOOLEAN, Optional): Whether the tax rate is inclusive (true) or exclusive (false)
* `country` (TEXT, Optional): Two-letter country code (e.g., 'DE' for Germany)
* `description` (TEXT, Optional): Description of the tax rate
* `metadata` (TEXT, Optional): Set of key-value pairs that you can attach to an object
**Output:** Returns the created tax rate details
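The `inclusive` flag changes how the percentage relates to the amount: an exclusive rate is added on top of the net price, while an inclusive rate is already contained in the gross price. A small sketch of the arithmetic (amounts in cents):

```python
# Tax contained in / added to an amount, depending on the `inclusive` flag.

def tax_portion(amount_cents: int, percentage: float, inclusive: bool) -> int:
    if inclusive:
        # Back out the tax already contained in the gross amount
        return round(amount_cents - amount_cents / (1 + percentage / 100))
    # Exclusive: tax is added on top of the net amount
    return round(amount_cents * percentage / 100)
```

With a 19% rate, a 100.00 net price gains 19.00 of tax when exclusive, while a 119.00 gross price already contains 19.00 when inclusive.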
***
### Update subscription
##### `stripe.updatesubscription`
Updates an existing subscription
**Requires Confirmation:** Yes
**Parameters:**
* `subscriptionId` (TEXT, Required): The ID of the subscription to update
* `defaultTaxRates` (TEXT, Optional): Array of tax rate IDs to apply to the subscription. Example: `['txr_1234']`
* `items` (TEXT, Optional): Array of subscription items to update with tax rates. Example: `[{'id': 'si_xxx', 'tax_rates': ['txr_xxx']}]`
* `trialEnd` (TEXT, Optional): Unix timestamp for trial end
* `cancelAtPeriodEnd` (BOOLEAN, Optional): Whether to cancel at period end
* `description` (TEXT, Optional): Description for the subscription
* `metadata` (TEXT, Optional): Set of key-value pairs that you can attach to an object
* `coupon` (TEXT, Optional): The coupon ID to apply to this subscription
* `promotionCode` (TEXT, Optional): The promotion code ID to apply to this subscription
**Output:** Returns the updated subscription details
***
### List tax IDs
##### `stripe.listtaxIDs`
Lists all tax IDs for a customer
**Requires Confirmation:** No
**Parameters:**
* `customerId` (TEXT, Required): The ID of the customer to list tax IDs for
**Output:** Returns a list of tax IDs for the customer
***
### Get tax ID
##### `stripe.gettaxID`
Retrieves a specific tax ID for a customer
**Requires Confirmation:** No
**Parameters:**
* `customerId` (TEXT, Required): The ID of the customer
* `taxId` (TEXT, Required): The ID of the tax ID to retrieve
**Output:** Returns the specific tax ID details
***
### List tax rates
##### `stripe.listtaxrates`
Lists all tax rates in your Stripe account
**Requires Confirmation:** No
**Parameters:**
* `active` (BOOLEAN, Optional): Filter by active status (true/false)
* `limit` (NUMBER, Optional): Maximum number of tax rates to return (1-100)
**Output:** Returns a list of tax rates
***
### Get invoice
##### `stripe.getinvoice`
Retrieves a specific invoice by ID with full details including line items
**Requires Confirmation:** No
**Parameters:**
* `invoiceId` (TEXT, Required): The ID of the invoice to retrieve (e.g., in\_...)
**Output:** Returns the invoice details with line items
***
### Get subscription
##### `stripe.getsubscription`
Retrieves detailed subscription information including current period, items, and products
**Requires Confirmation:** No
**Parameters:**
* `subscriptionId` (TEXT, Required): The ID of the subscription to retrieve (e.g., sub\_...)
**Output:** Returns the subscription details
***
### List overdue invoices
##### `stripe.listoverdueinvoices`
Lists all invoices that are past their due date, sorted by days overdue
**Requires Confirmation:** No
**Parameters:**
* `customerId` (TEXT, Optional): Only return overdue invoices for this customer
* `limit` (NUMBER, Optional): Maximum number of overdue invoices to return (1-100)
* `daysOverdue` (NUMBER, Optional): Only return invoices that have been overdue for at least this many days
**Output:** Returns a list of overdue invoices
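The `daysOverdue` filter counts whole days past the due date. A minimal sketch of the same calculation done locally, for cross-checking results:

```python
from datetime import date

# Days an invoice is overdue: 0 if the due date has not passed yet.
def days_overdue(due: date, today: date) -> int:
    return max(0, (today - due).days)
```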
***
## Common Use Cases
* Manage and organize your Stripe data
* Automate workflows with Stripe
* Generate insights and reports
* Connect Stripe with other tools
## Best Practices
**Getting Started:**
1. Enable the Stripe integration in your workspace settings
2. Authenticate using API Key
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your API Key credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Stripe integration, contact [support@langdock.com](mailto:support@langdock.com)
# Tableau
Source: https://docs.langdock.com/administration/integrations/tableau
Business intelligence and data visualization platform for analytics
## Overview
Business intelligence and data visualization platform for analytics. Through Langdock's integration, you can access and manage Tableau directly from your conversations.
**Authentication:** API Key\
**Category:** Productivity & Collaboration\
**Availability:** All workspace plans
## Available Actions
### Query view data
##### `tableau.queryviewdata`
Extracts data from a specific Tableau view in CSV format
**Requires Confirmation:** No
**Parameters:**
* `workbookId` (TEXT, Required): The unique identifier of the workbook containing the view
* `viewId` (TEXT, Required): The unique identifier of the view to extract data from
* `filters` (TEXT, Optional): URL parameters to filter the view data, e.g. 'vf\_customer=Fraport\&vf\_date\_range=last\_30\_days'
**Output:** Returns the view data in CSV format
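The `filters` string is a set of `vf_`-prefixed URL parameters, one per view filter. A sketch of assembling it from a plain dict — the field names are illustrative, and values should be URL-encoded:

```python
from urllib.parse import urlencode

# Build the vf_-prefixed filter string for tableau.queryviewdata.
def build_view_filters(filters: dict) -> str:
    return urlencode({f"vf_{field}": value for field, value in filters.items()})

fs = build_view_filters({"customer": "Fraport", "date_range": "last_30_days"})
```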
***
### List workbooks
##### `tableau.listworkbooks`
Gets all workbooks available in the site
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns a list of workbooks
***
### List views for workbook
##### `tableau.listviewsforworkbook`
Gets all views within a specific workbook
**Requires Confirmation:** No
**Parameters:**
* `workbookId` (TEXT, Required): The unique identifier of the workbook to get views from
**Output:** Returns a list of views for the workbook
***
### Get view image
##### `tableau.getviewimage`
Downloads a PNG image of a specific Tableau view
**Requires Confirmation:** No
**Parameters:**
* `workbookId` (TEXT, Required): The unique identifier of the workbook containing the view
* `viewId` (TEXT, Required): The unique identifier of the view to capture as image
* `filters` (TEXT, Optional): URL parameters to filter the view before capturing image
**Output:** Returns a PNG image of the view
***
## Common Use Cases
* Manage and organize your Tableau data
* Automate workflows with Tableau
* Generate insights and reports
* Connect Tableau with other tools
## Best Practices
**Getting Started:**
1. Enable the Tableau integration in your workspace settings
2. Authenticate using API Key
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your API Key credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Tableau integration, contact [support@langdock.com](mailto:support@langdock.com)
# Vertex AI Vector Search
Source: https://docs.langdock.com/administration/integrations/vertex-ai
Vector search engine with semantic search capabilities
## Overview
Vector search engine with semantic search capabilities. Through Langdock's integration, you can access and manage Vertex AI Vector Search directly from your conversations.
**Authentication:** Service Account\
**Category:** AI & Search\
**Availability:** All workspace plans
## Available Actions
### Search vector index
##### `vertexaivectorsearch.searchvectorindex`
Searches the database for the most relevant information based on the query provided
**Requires Confirmation:** No
**Parameters:**
* `query` (VECTOR, Required): The search query for vector similarity search
* `publicDomainName` (TEXT, Required): The public domain name of the vector index you want to query. Found in Google Cloud Console: Vertex AI → Vector Search → Index Endpoints → \[Your Endpoint] → Endpoint info
* `projectIdNumber` (TEXT, Required): The project ID is the name of your Google project and the project number is its associated number. Find both in the Google Cloud Console: click the settings icon in the top right → "Project Settings" → both Project ID and Project Number are listed
* `region` (TEXT, Required): The region of the index / vector database you want to query, can be found in Google Cloud Console: Vertex AI → Vector Search → Index Endpoints → \[Your Endpoint] → Endpoint info, example format: us-central1
* `indexEndpointId` (TEXT, Required): The unique identifier of your Index / vector database, can be found in Google Cloud Console: Vertex AI → Vector Search → Index Endpoints → \[Your Endpoint] → Endpoint info
* `deployedIndexId` (TEXT, Required): The deployment name of your Index / vector database, can be found in Google Cloud Console: Vertex AI → Vector Search → Index Endpoints → \[Your Endpoint] → Endpoint info → Deployed index column in the table
**Output:** Returns the most relevant search results from the vector index
***
## Common Use Cases
* Manage and organize your Vertex AI Vector Search data
* Automate workflows with Vertex AI Vector Search
* Generate insights and reports
* Connect Vertex AI Vector Search with other tools
## Best Practices
**Getting Started:**
1. Enable the Vertex AI Vector Search integration in your workspace settings
2. Authenticate using Service Account
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your Service Account credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Vertex AI Vector Search integration, contact [support@langdock.com](mailto:support@langdock.com)
# Wrike
Source: https://docs.langdock.com/administration/integrations/wrike
Wrike is a collaborative work management platform that helps teams plan, manage, and complete projects faster
## Overview
Wrike is a collaborative work management platform that helps teams plan, manage, and complete projects faster. Through Langdock's integration, you can access and manage Wrike directly from your conversations.
**Authentication:** OAuth\
**Category:** Productivity & Collaboration\
**Availability:** All workspace plans
## Available Actions
### Get folder/project
##### `wrike.getfolderproject`
Get detailed information about a specific folder or project
**Requires Confirmation:** No
**Parameters:**
* `folderId` (TEXT, Required): ID of the folder or project (supports comma-separated list up to 1000 IDs)
* `withInvitations` (BOOLEAN, Optional): Include invitations in sharedIds list
* `plainTextCustomFields` (BOOLEAN, Optional): Strip HTML tags from custom fields
* `fields` (TEXT, Optional): Comma-separated list of optional fields to include
**Output:** Returns detailed folder/project information
***
### Create task
##### `wrike.createtask`
Create a new task in a folder or project. Supports HTML formatting in task description.
**Requires Confirmation:** Yes
**Parameters:**
* `folderId` (TEXT, Required): ID of the folder or project where task will be created
* `title` (TEXT, Required): Task title
* `description` (TEXT, Optional): Task description with HTML support
* `status` (TEXT, Optional): Task status (Active, Completed, Deferred, Cancelled)
* `importance` (TEXT, Optional): Task importance (High, Normal, Low)
* `dates` (TEXT, Optional): Task scheduling in JSON format
* `responsibles` (TEXT, Optional): JSON array of user IDs to assign
* `shareds` (TEXT, Optional): JSON array of user IDs to share task with
* `parents` (TEXT, Optional): JSON array of parent folder IDs
* `followers` (TEXT, Optional): JSON array of user IDs to add as followers
* `follow` (BOOLEAN, Optional): Follow the task yourself
* `priorityBefore` (TEXT, Optional): Put newly created task before this task ID in task list
* `priorityAfter` (TEXT, Optional): Put newly created task after this task ID in task list
* `superTasks` (TEXT, Optional): JSON array of parent task IDs to make this a subtask
* `metadata` (TEXT, Optional): JSON array of metadata entries
* `customFields` (TEXT, Optional): JSON array of custom field values
* `customStatus` (TEXT, Optional): Custom status ID for the task
**Output:** Returns the created task details
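Several of the parameters above (`dates`, `responsibles`, `parents`, …) are passed as JSON-encoded strings. A sketch of serializing them from Python — note that the key names inside `dates` (`start`, `due`) and the contact ID are assumptions chosen for illustration, not confirmed field names:

```python
import json

# Illustrative JSON-string parameters for wrike.createtask.
# Key names in `dates` and the contact ID are hypothetical.
dates = json.dumps({"start": "2024-06-01", "due": "2024-06-15"})
responsibles = json.dumps(["KUAJ5KJA"])  # hypothetical contact ID
```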
***
### Update task
##### `wrike.updatetask`
Update single or multiple tasks
**Requires Confirmation:** Yes
**Parameters:**
* `taskId` (TEXT, Required): ID of the task to update (or use taskIds for multiple)
* `title` (TEXT, Optional): New task title
* `description` (TEXT, Optional): New task description
* `status` (TEXT, Optional): New task status (Active, Completed, Deferred, Cancelled)
* `importance` (TEXT, Optional): New task importance (High, Normal, Low)
* `dates` (TEXT, Optional): Update task scheduling in JSON format
* `addParents` (TEXT, Optional): Put task into specified folders. JSON array of folder IDs
* `removeParents` (TEXT, Optional): Remove task from specified folders. JSON array of folder IDs
* `addShareds` (TEXT, Optional): Share task with specified users or invitations. JSON array of contact IDs
* `removeShareds` (TEXT, Optional): Unshare task from specified users or invitations. JSON array of contact IDs
* `addResponsibles` (TEXT, Optional): Add specified users or invitations to assignee list. JSON array of contact IDs
* `removeResponsibles` (TEXT, Optional): Remove specified users or invitations from assignee list. JSON array of contact IDs
* `addResponsiblePlaceholders` (TEXT, Optional): Add specified placeholders to placeholder assignee list. JSON array
* `removeResponsiblePlaceholders` (TEXT, Optional): Remove specified placeholders from placeholder assignee list. JSON array
* `addFollowers` (TEXT, Optional): Add specified users to task followers. JSON array of contact IDs
* `follow` (BOOLEAN, Optional): Follow task yourself
* `priorityBefore` (TEXT, Optional): Put task in task list before specified task ID
* `priorityAfter` (TEXT, Optional): Put task in task list after specified task ID
* `addSuperTasks` (TEXT, Optional): Add the task as subtask to specified tasks. JSON array of task IDs
* `removeSuperTasks` (TEXT, Optional): Remove the task from specified tasks subtasks. JSON array of task IDs
* `metadata` (TEXT, Optional): Metadata to be updated (null value removes entry). JSON array of key-value pairs
* `customFields` (TEXT, Optional): Custom fields to be updated or deleted (null value removes field). JSON array
* `customStatus` (TEXT, Optional): Custom status ID
* `restore` (BOOLEAN, Optional): Restore task from Recycled Bin
* `effortAllocation` (TEXT, Optional): Set Task Effort fields: mode, totalEffort. JSON object
* `billingType` (TEXT, Optional): Task's timelogs billing type
* `withInvitations` (BOOLEAN, Optional): Include invitations in sharedIds & responsibleIds lists
* `convertToCustomItemType` (TEXT, Optional): Custom Item Type ID to convert task to
* `plainTextCustomFields` (BOOLEAN, Optional): Strip HTML tags from custom fields
* `fields` (TEXT, Optional): JSON array of optional fields to be included in the response
**Output:** Returns the updated task details
***
### Create custom field
##### `wrike.createcustomfield`
Create custom field in specified account
**Requires Confirmation:** Yes
**Parameters:**
* `title` (TEXT, Required): Custom field title
* `type` (TEXT, Required): Type of custom field (Text, Numeric, Currency, Percentage, Date, Duration, DropDown, Multiple, Checkbox, Contacts, LinkToDatabase)
* `spaceId` (TEXT, Optional): Optional space ID
* `sharing` (TEXT, Optional): JSON object for sharing settings
* `settings` (TEXT, Optional): JSON object for field-specific settings
* `shareds` (TEXT, Optional): Comma-separated list of user IDs to share field with (deprecated, use sharing instead)
* `description` (TEXT, Optional): Custom field description
**Output:** Returns the created custom field details
***
### Create dependency
##### `wrike.createdependency`
Add a dependency between tasks
**Requires Confirmation:** Yes
**Parameters:**
* `taskId` (TEXT, Required): ID of the task
* `relationType` (TEXT, Required): Type of dependency (FinishToStart, StartToStart, FinishToFinish, StartToFinish)
* `predecessorId` (TEXT, Optional): ID of the predecessor task (leave empty if using successorId)
* `successorId` (TEXT, Optional): ID of the successor task (leave empty if using predecessorId)
* `lagTime` (NUMBER, Optional): Lag time in minutes
**Output:** Returns the created dependency details
***
### Create folder comment
##### `wrike.createfoldercomment`
Create a new comment in a folder. Supports HTML formatting in comment text.
**Requires Confirmation:** Yes
**Parameters:**
* `folderId` (TEXT, Required): ID of the folder or project
* `text` (TEXT, Required): Comment text. Cannot be empty. Supports HTML tags for formatting when plainText=false.
* `plainText` (BOOLEAN, Optional): Set to true for plain text, false for HTML format (default: false)
**Output:** Returns the created comment details
***
### Create group
##### `wrike.creategroup`
Create new groups in the account
**Requires Confirmation:** Yes
**Parameters:**
* `title` (TEXT, Required): Title for the new group
* `members` (TEXT, Optional): Array of user IDs to add as members (JSON or comma-separated)
* `parent` (TEXT, Optional): ID of parent group (optional)
* `avatar` (TEXT, Optional): Avatar configuration as JSON object or initials text
* `metadata` (TEXT, Optional): JSON array of key-value pairs for group metadata
**Output:** Returns the created group details
***
### Create invitation
##### `wrike.createinvitation`
Create an invitation into the current account
**Requires Confirmation:** Yes
**Parameters:**
* `email` (TEXT, Required): Email address for the invitation
* `firstName` (TEXT, Optional): First name of the invitee
* `lastName` (TEXT, Optional): Last name of the invitee
* `userTypeId` (TEXT, Optional): Modern user type ID (preferred over role/external)
* `role` (TEXT, Optional): User role (User, Admin, Collaborator) - deprecated, use userTypeId
* `external` (BOOLEAN, Optional): Set to true for external user - deprecated, use userTypeId
* `subject` (TEXT, Optional): Custom email subject (not available for free accounts)
* `message` (TEXT, Optional): Custom email message (not available for free accounts)
**Output:** Returns the created invitation details
***
### Create space
##### `wrike.createspace`
Create a new space with specified configuration
**Requires Confirmation:** Yes
**Parameters:**
* `title` (TEXT, Required): Title for the new space
* `accessType` (TEXT, Required): Space access type (Locked, Private, or Public)
* `description` (TEXT, Optional): Optional space description
* `members` (TEXT, Optional): JSON array of member objects. Each must have: id (user ID), accessRoleId (role ID), and isManager (boolean)
* `guestRoleId` (TEXT, Optional): Role ID for guest access (for public spaces)
* `defaultProjectWorkflowId` (TEXT, Optional): Default workflow ID for projects in this space
* `suggestedProjectWorkflows` (TEXT, Optional): JSON array of suggested project workflow IDs
* `defaultTaskWorkflowId` (TEXT, Optional): Default workflow ID for tasks in this space
* `suggestedTaskWorkflows` (TEXT, Optional): JSON array of suggested task workflow IDs
* `withInvitations` (BOOLEAN, Optional): Send email invitations to new members
* `fields` (TEXT, Optional): JSON array of optional fields to include in response
**Output:** Returns the created space details
***
### Create task comment
##### `wrike.createtaskcomment`
Create a new comment in a task. Supports HTML formatting in comment text.
**Requires Confirmation:** Yes
**Parameters:**
* `taskId` (TEXT, Required): ID of the task
* `text` (TEXT, Required): Comment text. Cannot be empty. Supports HTML tags for formatting when plainText=false.
* `plainText` (BOOLEAN, Optional): Set to true for plain text, false for HTML format (default: false)
**Output:** Returns the created comment details
***
### Get access roles
##### `wrike.getaccessroles`
Returns all access roles in the account
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns all access roles
***
### Get account
##### `wrike.getaccount`
Returns current account information
**Requires Confirmation:** No
**Parameters:**
* `includeCustomFields` (BOOLEAN, Optional): Include custom fields in response
* `includeMetadata` (BOOLEAN, Optional): Include metadata in response
* `includeSubscription` (BOOLEAN, Optional): Include subscription information in response
* `metadataFilter` (TEXT, Optional): JSON object for metadata filtering
**Output:** Returns account information
***
### Get all comments
##### `wrike.getallcomments`
Get all comments in current account
**Requires Confirmation:** No
**Parameters:**
* `plainText` (BOOLEAN, Optional): Return comments as plain text instead of HTML
* `types` (TEXT, Optional): Comma-separated list of comment types (Regular, Email)
* `createdDate` (TEXT, Optional): JSON date range object (max 7 days)
* `limit` (NUMBER, Optional): Maximum number of comments to return (default 1000)
* `fields` (TEXT, Optional): Comma-separated list of additional fields to include
* `groupByAuthor` (BOOLEAN, Optional): Group comments by author
* `groupByDate` (BOOLEAN, Optional): Group comments by date
**Output:** Returns all comments
***
### Get all contacts
##### `wrike.getallcontacts`
List contacts of all users and user groups in current account
**Requires Confirmation:** No
**Parameters:**
* `me` (BOOLEAN, Optional): Return only requesting user's contact info
* `metadata` (TEXT, Optional): JSON metadata filter for exact key or key-value match
* `deleted` (BOOLEAN, Optional): Include deleted contacts
* `customFields` (TEXT, Optional): Comma-separated list of custom field IDs to include
* `emails` (TEXT, Optional): Comma-separated list of email addresses to filter by
* `active` (BOOLEAN, Optional): Filter by active status
* `name` (TEXT, Optional): Filter contacts by name
* `types` (TEXT, Optional): Comma-separated list of types (Person, Group, Robot)
* `fields` (TEXT, Optional): Comma-separated list of additional fields to include
**Output:** Returns all contacts
***
### Get custom fields
##### `wrike.getcustomfields`
Get custom fields - either all fields or specific fields by IDs
**Requires Confirmation:** No
**Parameters:**
* `customFieldIds` (TEXT, Optional): Comma-separated list of custom field IDs (up to 1000). If not provided, returns all custom fields.
* `applicableEntityTypes` (TEXT, Optional): Comma-separated list of entity types (default: WorkItem)
* `types` (TEXT, Optional): Comma-separated list of custom field types to filter by (only for getting all fields)
* `inheritanceTypes` (TEXT, Optional): Comma-separated list of inheritance types (only for getting all fields)
* `title` (TEXT, Optional): Filter custom fields by title (only for getting all fields)
**Output:** Returns custom fields
***
### Get folder tree
##### `wrike.getfoldertree`
Returns folders in tree or flat mode with organizational analysis
**Requires Confirmation:** No
**Parameters:**
* `folderId` (TEXT, Optional): Optional - Get folders from specific folder
* `spaceId` (TEXT, Optional): Optional - Get folders from specific space
* `permalink` (TEXT, Optional): Filter by specific permalink
* `descendants` (BOOLEAN, Optional): Include descendant folders (affects tree/folders mode)
* `metadata` (TEXT, Optional): JSON object for metadata filtering
* `customFields` (TEXT, Optional): Filter by custom fields (JSON array)
* `customField` (TEXT, Optional): Deprecated - use customFields instead
* `updatedDate` (TEXT, Optional): JSON date range for updated date filter
* `withInvitations` (BOOLEAN, Optional): Include invitations in sharedIds list
* `project` (BOOLEAN, Optional): Filter by project status (true = only projects, false = only folders)
* `deleted` (BOOLEAN, Optional): Include deleted folders (true = Recycle Bin, false = Root)
* `contractTypes` (TEXT, Optional): JSON array of contract types to filter
* `plainTextCustomFields` (BOOLEAN, Optional): Strip HTML tags from custom fields
* `customItemTypes` (TEXT, Optional): JSON array of custom item type IDs
* `pageSize` (NUMBER, Optional): Number of folders per page (max 1000, only for folders mode)
* `nextPageToken` (TEXT, Optional): Pagination token for next page
* `customStatuses` (TEXT, Optional): JSON array of custom status IDs
* `authors` (TEXT, Optional): JSON array of author user IDs
* `owners` (TEXT, Optional): JSON array of owner user IDs
* `startDate` (TEXT, Optional): JSON date range for start date filter
* `endDate` (TEXT, Optional): JSON date range for end date filter
* `completedDate` (TEXT, Optional): JSON date range for completed date filter
* `title` (TEXT, Optional): Filter folders by title (contains match)
* `fields` (TEXT, Optional): Comma-separated list of optional fields to include
**Output:** Returns folder tree structure
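**Example filter values** (illustrative only — the IDs and dates below are hypothetical): `updatedDate` takes a JSON date range object, and `customFields` takes a JSON array of field/value pairs:

```json
{
  "updatedDate": { "start": "2024-01-01", "end": "2024-03-31" },
  "customFields": [{ "id": "IEAAAAAQJUAAAAAA", "value": "Marketing" }]
}
```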
***
### Create workflow
##### `wrike.createworkflow`
Creates a new workflow in the account
**Requires Confirmation:** Yes
**Parameters:**
* `name` (TEXT, Required): Name of the workflow (max 128 characters)
* `description` (TEXT, Optional): Optional workflow description (max 2000 characters)
**Output:** Returns the created workflow details
***
### Get task attachments
##### `wrike.gettaskattachments`
Returns all attachments of a task
**Requires Confirmation:** No
**Parameters:**
* `taskId` (TEXT, Required): ID of the task to get attachments for
* `versions` (BOOLEAN, Optional): Include all versions of attachments
* `withUrls` (BOOLEAN, Optional): Include download URLs (valid for 24 hours)
* `createdDate` (TEXT, Optional): JSON date range filter with `start` and `end` in YYYY-MM-DD format (max 31 days)
**Output:** Returns task attachments
***
### Get task comments
##### `wrike.gettaskcomments`
Get comments for a specific task
**Requires Confirmation:** No
**Parameters:**
* `taskId` (TEXT, Required): ID of the task to get comments for
* `plainText` (BOOLEAN, Optional): Return comments in plain text format
* `types` (TEXT, Optional): Comma-separated list of comment types (Regular, Email)
* `groupByAuthor` (BOOLEAN, Optional): Group comments by author ID
* `sortBy` (SELECT, Optional): Sort comments by date (newest or oldest)
* `fields` (TEXT, Optional): Comma-separated list of additional fields to include
**Output:** Returns task comments
***
### Get task dependencies
##### `wrike.gettaskdependencies`
Query all dependencies for a specific task
**Requires Confirmation:** No
**Parameters:**
* `taskId` (TEXT, Optional): ID of the task to get dependencies for (required if dependencyIds not provided)
* `dependencyIds` (TEXT, Optional): Specific dependency IDs to retrieve (comma-separated, max 100)
**Output:** Returns task dependencies
***
### Get user
##### `wrike.getuser`
Returns information about a single user
**Requires Confirmation:** No
**Parameters:**
* `userId` (TEXT, Required): ID of the user to retrieve
**Output:** Returns user information
***
### Modify workflow
##### `wrike.modifyworkflow`
Updates workflow configuration or adds/modifies custom statuses
**Requires Confirmation:** Yes
**Parameters:**
* `workflowId` (TEXT, Required): ID of the workflow to modify
* `name` (TEXT, Optional): New name for the workflow (max 128 characters)
* `description` (TEXT, Optional): Workflow description (max 2000 characters)
* `hidden` (BOOLEAN, Optional): Whether the workflow should be hidden from users
* `customStatus` (TEXT, Optional): JSON object for adding/modifying custom status (name, color, group required)
**Output:** Returns the modified workflow details
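**Example `customStatus` value.** Per the parameter description above, `name`, `color`, and `group` are required; the values below are illustrative only:

```json
{
  "name": "In Review",
  "color": "Blue",
  "group": "Active"
}
```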
***
### Query groups
##### `wrike.querygroups`
Get groups information
**Requires Confirmation:** No
**Parameters:**
* `groupId` (TEXT, Optional): ID of a specific group to query
* `fields` (TEXT, Optional): Optional fields to include in the response (comma-separated or array)
* `metadata` (TEXT, Optional): Filter by metadata (JSON object)
* `pageSize` (NUMBER, Optional): Number of groups per page
* `pageToken` (TEXT, Optional): Token for pagination
**Output:** Returns groups information
***
### Update custom field
##### `wrike.updatecustomfield`
Modify an existing custom field
**Requires Confirmation:** Yes
**Parameters:**
* `customFieldId` (TEXT, Required): ID of the custom field to update
* `title` (TEXT, Optional): New title for the custom field
* `type` (SELECT, Optional): Field type (Text, Numeric, Date, etc. - LinkToDatabase not supported for updates)
* `changeScope` (SELECT, Optional): Scope of change application
* `spaceId` (TEXT, Optional): ID of the space to associate with the field
* `sharing` (TEXT, Optional): JSON object with readerIds and writerIds arrays
* `settings` (TEXT, Optional): JSON object with field-specific settings
* `description` (TEXT, Optional): Field description
* `addShareds` (TEXT, Optional): Array of user IDs to share the field with (deprecated, use sharing instead)
* `removeShareds` (TEXT, Optional): Array of user IDs to remove sharing from (deprecated, use sharing instead)
* `addMirrors` (TEXT, Optional): Array of mirror field definitions for LinkToDatabase fields
* `removeMirrors` (TEXT, Optional): Array of mirror field IDs to remove from LinkToDatabase fields
**Output:** Returns the updated custom field details
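**Example `sharing` value.** As described above, the object contains `readerIds` and `writerIds` arrays; the user IDs below are hypothetical:

```json
{
  "readerIds": ["KUAAAAA1", "KUAAAAA2"],
  "writerIds": ["KUAAAAA3"]
}
```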
***
### Update space
##### `wrike.updatespace`
Update an existing space configuration
**Requires Confirmation:** Yes
**Parameters:**
* `spaceId` (TEXT, Required): ID of the space to update
* `title` (TEXT, Optional): New space title
* `description` (TEXT, Optional): Space description
* `accessType` (SELECT, Optional): Space access type
* `membersAdd` (TEXT, Optional): Array of members to add with id and accessRoleId
* `membersRemove` (TEXT, Optional): Array of member IDs to remove from space
* `guestRoleId` (TEXT, Optional): ID of the guest role for public spaces (empty to remove)
* `defaultProjectWorkflowId` (TEXT, Optional): Default workflow for new projects (empty to remove)
* `membersUpdate` (TEXT, Optional): Array of members to update with id and accessRoleId
* `suggestedProjectWorkflowsAdd` (TEXT, Optional): Array of workflow IDs to add as suggested for projects
* `suggestedProjectWorkflowsRemove` (TEXT, Optional): Array of workflow IDs to remove from suggested for projects
* `defaultTaskWorkflowId` (TEXT, Optional): Default workflow for new tasks (empty to remove)
* `suggestedTaskWorkflowsAdd` (TEXT, Optional): Array of workflow IDs to add as suggested for tasks
* `suggestedTaskWorkflowsRemove` (TEXT, Optional): Array of workflow IDs to remove from suggested for tasks
* `withInvitations` (BOOLEAN, Optional): Include invitations in member operations
* `fields` (TEXT, Optional): Optional fields to include in the response
**Output:** Returns the updated space details
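**Example `membersAdd` value.** Per the parameter description, each member entry carries an `id` and an `accessRoleId`; the IDs below are hypothetical:

```json
[
  { "id": "KUAAAAA1", "accessRoleId": "IEAAAAAQJUAAAAAA" }
]
```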
***
### Query tasks
##### `wrike.querytasks`
Search through tasks in the account with flexible filtering options
**Requires Confirmation:** No
**Parameters:**
* `folderId` (TEXT, Optional): Filter tasks by folder/project ID
* `responsibles` (TEXT, Optional): Array of Contact IDs for assignees filter
* `status` (TEXT, Optional): Filter by status (Active, Completed, Deferred, Cancelled)
* `importance` (TEXT, Optional): Filter by importance (High, Normal, Low)
* `startDate` (TEXT, Optional): JSON date range for start dates (YYYY-MM-DD format)
* `dueDate` (TEXT, Optional): JSON date range for due dates (YYYY-MM-DD format)
**Output:** Returns matching tasks
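**Example date range value.** The `startDate` and `dueDate` parameters expect a JSON date range object (dates below are illustrative):

```json
{ "start": "2024-06-01", "end": "2024-06-30" }
```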
***
### Query workflows
##### `wrike.queryworkflows`
Returns list of workflows with custom statuses
**Requires Confirmation:** No
**Parameters:** None
**Output:** Returns list of workflows
***
### Update account metadata
##### `wrike.updateaccountmetadata`
Update account metadata (key-value pairs). Note: This updates metadata only, not subscription settings. Requires admin privileges.
**Requires Confirmation:** Yes
**Parameters:**
* `metadata` (TEXT, Required): JSON object of key-value pairs to update
**Output:** Returns the updated account metadata
***
### Update attachment
##### `wrike.updateattachment`
Update previously uploaded attachment with new version
**Requires Confirmation:** No
**Parameters:**
* `attachmentId` (TEXT, Required): ID of the attachment to update
* `fileContent` (TEXT, Optional): File content (base64 encoded or plain text)
* `fileUrl` (TEXT, Optional): URL to download the file from
* `url` (TEXT, Optional): URL for Wrike to download the file from
* `fileName` (TEXT, Optional): Name for the updated file
* `contentType` (TEXT, Optional): MIME type of the file
**Output:** Returns updated attachment details
***
### Update comment
##### `wrike.updatecomment`
Update previously posted comment text
**Requires Confirmation:** No
**Parameters:**
* `commentId` (TEXT, Required): ID of the comment to update
* `text` (TEXT, Required): New comment text
**Output:** Returns updated comment details
***
### Update user
##### `wrike.updateuser`
Update user by ID (Admin access required)
**Requires Confirmation:** Yes
**Parameters:**
* `userId` (TEXT, Required): ID of the user to update
* `profile` (TEXT, Optional): JSON profile object with accountId and role
* `userTypeId` (TEXT, Optional): ID of the new user type
* `active` (TEXT, Optional): Whether the user should be active (true/false)
**Output:** Returns updated user details
***
## Common Use Cases
* Manage and organize your Wrike data
* Automate workflows with Wrike
* Generate insights and reports
* Connect Wrike with other tools
## Best Practices
**Getting Started:**
1. Enable the Wrike integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Wrike integration, contact [support@langdock.com](mailto:support@langdock.com)
# Zendesk
Source: https://docs.langdock.com/administration/integrations/zendesk
Customer support platform for managing tickets and service requests
## Overview
Zendesk is a customer support platform for managing tickets and service requests. Through Langdock's integration, you can access and manage Zendesk directly from your conversations.
**Authentication:** OAuth\
**Category:** CRM & Customer Support\
**Availability:** All workspace plans
## Available Actions
### Get Article
##### `zendesk.getArticle`
Gets a specific article from the Zendesk Help Center by ID
**Requires Confirmation:** No
**Parameters:**
* `articleId` (TEXT, Required): The unique identifier of the article to retrieve
**Output:** Returns the article details
***
### Get Ticket
##### `zendesk.getTicket`
Gets a specific ticket by ID with basic ticket information
**Requires Confirmation:** No
**Parameters:**
* `ticketId` (TEXT, Required): The unique identifier of the ticket to retrieve
**Output:** Returns the ticket details
***
### Get Ticket Conversation Log
##### `zendesk.getTicketConversationLog`
Gets a specific ticket by ID along with its complete conversation history and logs
**Requires Confirmation:** No
**Parameters:**
* `ticketId` (TEXT, Required): The unique identifier of the ticket to retrieve
**Output:** Returns the ticket with complete conversation history
***
### Find Help Center Articles
##### `zendesk.findHelpCenterArticles`
Find articles in the Zendesk Help Center
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Optional): Search for articles containing specific text
* `locales` (TEXT, Optional): Filter results by language locale
* `categoryIds` (TEXT, Optional): Limit results to articles within a specific category
* `sectionIds` (TEXT, Optional): Limit results to articles within a specific section
* `tags` (TEXT, Optional): Filter articles by their label names
* `updatedAfter` (TEXT, Optional): Filter articles updated after this date
* `updatedBefore` (TEXT, Optional): Filter articles updated before this date
* `sortBy` (SELECT, Optional): Choose how to sort the article results
* `sortOrder` (SELECT, Optional): Choose the direction for sorting results
**Output:** Returns matching articles from the Help Center
***
### Find tickets
##### `zendesk.findtickets`
Finds existing tickets by searching
**Requires Confirmation:** No
**Parameters:**
* `searchQuery` (TEXT, Optional): A text string for full-text search across all searchable properties
* `sortOrder` (SELECT, Optional): Sort order for results
* `sortBy` (SELECT, Optional): Sorting parameter for returned results
**Output:** Returns matching tickets based on search criteria
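**Example `searchQuery` value.** The query follows Zendesk's standard search syntax of `keyword:value` pairs (the values below are illustrative):

```
status:open priority:high created>2024-01-01
```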
***
### Find users
##### `zendesk.findusers`
Finds users in Zendesk based on a query string or external ID
**Requires Confirmation:** No
**Parameters:**
* `query` (TEXT, Optional): The search query to find users
* `externalId` (TEXT, Optional): Search for a user by their external ID
* `page` (TEXT, Optional): The page number for pagination
* `perPage` (TEXT, Optional): The number of records to return per page
**Output:** Returns matching users
***
### List users
##### `zendesk.listusers`
Retrieves a list of users from Zendesk, with optional filtering by role, permission set, or external ID
**Requires Confirmation:** No
**Parameters:**
* `role` (SELECT, Optional): Filters the results by a single role
* `roles` (TEXT, Optional): Filters the results by multiple roles
* `permissionSet` (TEXT, Optional): For custom roles on Enterprise plan and above
* `externalId` (TEXT, Optional): List users by external ID
* `page` (TEXT, Optional): The page number for pagination
* `perPage` (TEXT, Optional): The number of records to return per page
**Output:** Returns a list of users
***
### Create Help Center Article
##### `zendesk.createHelpCenterArticle`
Creates a new article in the Zendesk Help Center knowledge base
**Requires Confirmation:** Yes
**Parameters:**
* `sectionId` (TEXT, Required): The ID of the section where the article will be created
* `title` (TEXT, Required): The title of the article
* `body` (MULTI\_LINE\_TEXT, Required): The main content of the article in HTML format
* `locale` (TEXT, Optional): The language/locale for the article
* `userSegmentId` (TEXT, Optional): The ID of the user segment that can view this article
* `permissionGroupId` (TEXT, Required): The ID of the permission group that can edit and publish this article
* `draft` (SELECT, Optional): Whether the article should be created as a draft
* `promoted` (SELECT, Optional): Whether the article should be promoted (featured)
* `position` (TEXT, Optional): The position of this article within its section
* `commentsDisabled` (SELECT, Optional): Whether to disable comments on this article
* `labelNames` (TEXT, Optional): Labels/tags to associate with the article
* `contentTagIds` (TEXT, Optional): Content tag IDs to attach to the article
* `authorId` (TEXT, Optional): The ID of the user who should be credited as the article author
* `notifySubscribers` (SELECT, Optional): Whether to send email notifications to article subscribers
**Output:** Returns the created article details
***
### Create ticket
##### `zendesk.createticket`
Creates a ticket in Zendesk
**Requires Confirmation:** Yes
**Parameters:**
* `subject` (TEXT, Required): The subject line of the ticket
* `description` (TEXT, Required): The main body content of the ticket
* `isPublic` (SELECT, Optional): Determines whether the comment is visible to the end-user/requester
* `priority` (TEXT, Optional): The urgency level of the ticket
* `status` (TEXT, Optional): The current state of the ticket in the support workflow
* `ticketType` (TEXT, Optional): The category of the ticket
* `tags` (TEXT, Optional): Labels applied to the ticket for categorization
* `customFields` (TEXT, Optional): Additional ticket fields specific to the organization's Zendesk configuration
* `requesterId` (TEXT, Optional): The ID of the user who requested the ticket
* `assigneeId` (TEXT, Optional): The ID of the agent the ticket should be assigned to
* `groupId` (TEXT, Optional): The ID of the support group the ticket should be assigned to
**Output:** Returns the created ticket details
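**Example `customFields` value.** Custom fields are typically passed as a JSON array of `id`/`value` pairs, where the IDs come from your organization's Zendesk configuration; the values below are hypothetical:

```json
[
  { "id": 360001234567, "value": "billing_issue" }
]
```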
***
### Update ticket
##### `zendesk.updateticket`
Updates an existing Zendesk ticket with new information
**Requires Confirmation:** Yes
**Parameters:**
* `ticketId` (TEXT, Required): The ID of the ticket to update
* `status` (SELECT, Optional): The new status of the ticket
* `customStatusId` (TEXT, Optional): The ID of a custom status to apply to the ticket
* `priority` (SELECT, Optional): The new priority level of the ticket
* `type` (SELECT, Optional): The new type of the ticket
* `assigneeId` (TEXT, Optional): The ID of the agent to assign the ticket to
* `groupId` (TEXT, Optional): The ID of the support group to assign the ticket to
* `tags` (TEXT, Optional): The new tags to apply to the ticket
* `customFields` (TEXT, Optional): The new custom field values to apply to the ticket
* `comment` (MULTI\_LINE\_TEXT, Optional): A comment to add to the ticket with this update
* `isPublic` (SELECT, Optional): Determines whether the comment is visible to the end-user/requester
* `authorId` (TEXT, Optional): The ID of the user who is adding the comment
**Output:** Returns the updated ticket details
***
### Reply to ticket
##### `zendesk.replytoticket`
Adds a comment/reply to an existing Zendesk ticket
**Requires Confirmation:** Yes
**Parameters:**
* `ticketId` (TEXT, Required): The ID of the ticket to reply to
* `comment` (MULTI\_LINE\_TEXT, Required): The content of the reply/comment to add to the ticket
* `isPublic` (SELECT, Optional): Determines whether the comment is visible to the end-user/requester
**Output:** Returns the reply comment details
***
## Common Use Cases
* Manage and organize your Zendesk data
* Automate workflows with Zendesk
* Generate insights and reports
* Connect Zendesk with other tools
## Best Practices
**Getting Started:**
1. Enable the Zendesk integration in your workspace settings
2. Authenticate using OAuth
3. Test the connection with a simple read operation
4. Explore available actions for your use case
**Important Considerations:**
* Ensure proper authentication credentials
* Respect rate limits and API quotas
* Review data privacy settings
* Test operations in a safe environment first
## Troubleshooting
| Issue | Solution |
| --------------------- | --------------------------------------- |
| Authentication failed | Verify your OAuth credentials |
| Rate limit exceeded | Reduce request frequency |
| Data not found | Check permissions and data availability |
| Connection timeout | Verify network connectivity |
## Support
For additional help with the Zendesk integration, contact [support@langdock.com](mailto:support@langdock.com)
# Agent Configuration
Source: https://docs.langdock.com/product/agents/configuration
There are different tools you can use to configure the agent and tailor it to your specific use case.
For more details you can read our detailed [agent creation guide](/resources/agent-creation) of how to build an agent.
You have the following configuration options to customize your agent:
### Icon, Name, and Description
Short descriptive information to identify the agent and describe how it works to other users.
### Input Type
There are two ways to send user input to your agent.
#### Prompt (Default)
The first is the chat input field you already know from normal chat. This lets you send any message to the agent and receive a response.
You can set conversation starters, which are saved prompts users can click instead of writing the first message. These help guide users and reduce the effort needed to get started.
#### Form
The other input type is the form input field. Forms collect information in a structured way, similar to a survey tool. This helps guide users to understand how much context is necessary for a quality response and collects it in a standardized format that the model can process more easily.
### Instructions
Describe what you want to achieve with this agent and define clear instructions. Include as many relevant details and background information as possible. This enables the agent to answer better and more closely to your expectations. Check out our [agent creation guide](/resources/agent-creation) and our [prompting guide](/resources/prompt-elements) for more details.
### Knowledge
Attach knowledge and files to your agent. You can either upload files from your computer or attach files from the [integrations](/resources/integrations/using-integrations) to the agent.
### Actions
Actions are capabilities you can give to your agent. These can be capabilities of the chat, like [web search](/product/chat/web-search), [image generation](/product/chat/image-generation), or [data analysis](/product/chat/data-analysis).
Actions can also be requests the agent sends to other tools you use, e.g. to create an email draft, update an entry in your CRM, or create a support ticket in your ticketing system. You can find our integration guides [here](/resources/integrations/using-integrations).
Deep Research is only available in regular chats, not when using agents. To use [Deep Research](/de/product/chat/deep-research), switch to a regular chat session.
### Model
Choose which model this agent will use. For details about choosing the right model, refer to our [model guide](/resources/models).
### Creativity
AI models choose the next word or token using probabilistic calculations. You can influence how much randomness you allow in generation, which affects creativity levels.
### Sharing
In the top right corner, you'll find options to share and use the agent. You can share it with anyone in the workspace or assign editing and usage permissions to specific groups or individuals.
### Usage Insights
Click the three dots to find usage insights. This section helps agent creators optimize based on user feedback.
There are two sections: Analytics shows quantitative insights like user numbers, messages, and conversations over time. The feedback tab lets you browse through user reactions (likes/dislikes) and comments about what needs improvement. Feedback is only shared with the agent creator when users actively choose to share it.
# Introduction to Agents
Source: https://docs.langdock.com/product/agents/introduction
Agents are specialized chatbots you can configure for specific use cases or documents. They work like regular chat, but with saved context (documents and instructions) so you don't need to set up the same conversation repeatedly.
We renamed Assistants to Agents. All features and configurations remain the same. You can find more information [here](https://www.langdock.com/changelog#assistants-are-now-agents).
## Purpose of agents
Chat works great for one-time or quick requests. Agents are better when you want to share your setup with others or handle recurring tasks efficiently.
Agents can:
* Work with specific documents that stay attached to the conversation
* Follow custom instructions to behave like a specialized chatbot for your use case
* Be shared with colleagues to streamline everyone's workflow
We have created a whole library of agents our customers set up and use every day. You can find it [here in our resource section](/resources/agent-templates).
## Internal agents
Most agents are focused on internal usage to improve internal processes. Your team can use agents in three ways:
* **Through the platform:** You can use/share agents and chat with them in the platform interface.
* **Via Slack:** You can add the Slack App to your Slack account and use the Agent from within Slack. Our [guide for the Slack integration](/resources/chatbots/slack) describes this integration in detail.
* **Via Teams (coming soon):** Similar to the Slack integration, we will release a Teams app that can be added to your Teams account so users can directly chat with agents in Teams.
## External agents
External agents will be shareable outside of Langdock. As no Langdock license is required to use them, this feature will have usage-based pricing, similar to the pricing of model providers like OpenAI, Anthropic, etc.
* **Via API:** For developer use cases and other situations, it can make sense to use attached documents while sending messages to an agent outside the interface. This will allow you to build chatbots for internal and external communication with your own interface.
# Agent Usage Insights
Source: https://docs.langdock.com/product/agents/usage-insights
Agent Usage Insights shows data on agent usage and user feedback. Use this page to review activity metrics, analyze feedback, and export data for further analysis.
## Usage Insights
Usage Insights provides comprehensive data about how your agent performs and how users interact with it. This section helps you optimize your agent based on both quantitative metrics and qualitative user feedback. Access Usage Insights by clicking the three dots menu next to your agent's Share button and selecting **Usage Insights**.
Both Analytics and Feedback data are available to agent creators and editors, giving you the insights needed to continuously improve your agent's effectiveness.
### Analytics
The Analytics tab provides quantitative insights into your agent's usage patterns and adoption metrics. This data helps you understand user engagement and identify trends over time.
#### Overview Metrics
You can view three key all-time statistics for your agent:
* **All-time users**: Total number of unique users who have interacted with your agent
* **All-time conversations**: Total number of conversation sessions initiated with your agent
* **All-time messages**: Total number of messages exchanged with your agent
#### Custom Timeframe Analysis
For more detailed insights, you can set custom timeframes to analyze specific periods:
1. **Daily Active Users**
2. **Conversations per Day**
3. **Messages per Day**
Use custom timeframes to compare performance before and after making changes to your agent's configuration, helping you measure the impact of your improvements.
***
### Feedback
Feedback is automatically enabled for all agents with no additional setup required from you as the creator. This built-in system helps you understand how users interact with your agent and identify areas for improvement.
Feedback collection runs automatically in the background. Users can provide feedback on any agent response without interrupting their workflow.
#### How Users Provide Feedback
When interacting with your agent, users have several options to share their experience:
* **Quick Rating**
Users can rate any agent response with a simple thumbs up or thumbs down directly in the chat interface.
* **Optional Contact Information**
Users can choose to share their name and email address with their feedback, making it easier for you to follow up on specific issues or suggestions.
* **Chat Sharing**
Users can decide to share the entire chat conversation with you. This works similarly to sharing a chat with a colleague, but only you as the creator of the agent will have access to view it.
* **Detailed Comments**
Users can add written comments explaining their feedback, providing specific context about what worked well or what needs improvement.
#### Accessing Your Feedback
You can review all feedback submissions through the Usage Insights section:
1. Open your agent
2. Click the three dots menu
3. Select **Usage Insights**
4. Navigate to the **Feedback** tab
The Feedback tab organizes submissions into positive and negative categories, making it easy to identify patterns and prioritize improvements.
In the feedback section, you'll see:
* **Positive and negative feedback** organized separately
* **User comments** when provided
* **Shared chat conversations** for detailed context
* **Contact information** when users choose to share it
### Data Export
Both Analytics and Feedback data can be exported separately as CSV files for external analysis, reporting, or record-keeping purposes.
* **Export Analytics**
Download quantitative usage data including user metrics, conversation counts, and message volumes for your specified timeframe.
* **Export Feedback**\
Download qualitative feedback data including ratings, comments, and user contact information when shared.
# Actions in Chat
Source: https://docs.langdock.com/product/chat/actions-in-chat
Access any integration, Agent, Knowledge Folder, or saved prompt directly in chat using the @ symbol for seamless workflow integration.
Actions in Chat transforms your Langdock experience by bringing all your tools directly into any conversation. Instead of switching between different interfaces, you can access integrations, Agents, Knowledge Folders, and saved prompts with a simple @ symbol, creating a unified chat where everything you need is just a mention away.
Actions in Chat works in both regular chats and Agent conversations, giving you consistent access to your tools regardless of where you're working.
## How Actions in Chat Works
Type `@` in any chat input field to open the actions menu. This works in both regular chats and when conversing with specific Agents.
The @ symbol acts as your universal access point to all Langdock resources, making it easy to remember and use consistently.
Start typing to search through your available options, or browse through the first 20 items displayed. The search covers:
* **Integrations**: Connect to external services and APIs
* **Agents**: Access specialized AI helpers you've created or shared with you
* **Knowledge Folders**: Reference specific document collections
* **Saved Prompts**: Use templates from your prompt library
Only the first 20 matching results appear initially. Type specific names to narrow down results and find exactly what you need.
Either press Enter to select the highlighted option or click directly on the tool you want to use. The chat interface will update to show your selection.
Once added, you'll see the tool's logo and name displayed in blue within the chat interface, confirming the active connection.
The visual indicator ensures you always know which tools are active in your current conversation.
## Adding Multiple Tools
After adding your first integration or tool, a **+** button appears in the chat interface. Click this button to add additional tools to the same conversation.
This approach works well when you know you need multiple specific tools for a complex task.
You can type `@` again at any point in the conversation to add more tools. This method feels natural when you discover you need additional resources mid-conversation.
Both methods give you the same functionality, so choose whichever feels more intuitive for your workflow.
## Key Differences from Agent Integrations
**Programmatic Control**: Integrations are pre-configured with specific actions and workflows defined during Agent creation.
**Predictable Behavior**: The Agent knows exactly which actions to call and when, based on your setup.
**AI-Driven Selection**: The model analyzes your request and chooses appropriate actions from available integrations dynamically.
**Flexible Access**: You have the same integration capabilities as Agents, but with real-time decision making.
Actions in Chat can only access integrations and tools that you could also use when creating an Agent. The available actions are determined by your permissions and the integrations configured in your workspace.
## Cross-Context Usage
Actions in Chat works seamlessly across different conversation types:
* **Regular Chats**: Access any tool to enhance your standard conversations
* **Agent Conversations**: Add integrations or call other Agents while chatting with a specific Agent
* **Mixed Workflows**: Combine multiple Agents, integrations, and knowledge sources in a single conversation
This cross-context functionality is particularly powerful for complex workflows where you might need to consult multiple specialized Agents or access different data sources within the same conversation thread.
## Troubleshooting
**Check permissions**: Ensure you have access to the integration, Agent, or Knowledge Folder you're looking for.
**Verify spelling**: Double-check the name you're typing matches the actual resource name.
**Try broader terms**: If searching for a specific name doesn't work, try searching for related keywords.
**Confirm integration status**: Check that the integration is properly configured and active in your workspace.
**Review permissions**: Ensure the integration has the necessary permissions to perform the requested actions.
**Check connection**: Some integrations may require re-authentication or have temporary connectivity issues.
## Next Steps
Now that you understand Actions in Chat, explore these related features:
* **[Creating Integrations](/resources/integrations/create-integrations)**: Build custom integrations for your specific workflow needs
* **[Agent Creation](/resources/agent-creation)**: Design specialized Agents that work seamlessly with Actions in Chat
* **[Knowledge Folders](/resources/integrations/knowledge-folders)**: Organize your documents for easy access via @ mentions
Actions in Chat creates a unified interface where all your Langdock tools work together, eliminating the need to switch contexts or remember different access methods.
# Canvas for development
Source: https://docs.langdock.com/product/chat/canvas-for-development
Canvas gives you a dedicated editing screen alongside your chat where you can iterate on code or text much more easily. This page focuses specifically on Canvas for development tasks like adding debug logs, fixing bugs, or generating new code from scratch.
## How to select Canvas
To use Canvas, open a new chat and select a model that supports Canvas. You'll see Canvas availability indicated by icons in the model selector. If the Canvas icon is greyed out, that model doesn't support Canvas.
Canvas works best with GPT-4.1 at the moment.
When using models that support Canvas, activate it by clicking the Canvas button in the input bar at the bottom of your screen.
Once you activate Canvas, the button will highlight to confirm it's active, and you can begin creating and editing in Canvas.
## How to use Canvas
Canvas automatically opens on the right side after you send your prompt and the AI generates a response. You now have several options to continue:
### In-line editing
You can edit the output directly in the Canvas editor on the right. You can type and modify text manually, or select specific sections to have the model reformat or rewrite just those parts.
### Previewing and Running code
You can now execute your code directly in Canvas! This means you can preview React components or run Python scripts without switching back to your IDE.
Look for the "Run Code" button or "Preview" toggle in the top right corner. Click either one to execute your code instantly.
This works because Canvas now includes a sandboxed runtime environment that can handle both React rendering and Python execution, making your development workflow much smoother.
### Optimizing the whole code generally
At the bottom right corner, you'll find several buttons to optimize your code:
By clicking this button, the model will analyze your code structure and add inline comments explaining the logic, function purposes, and complex operations throughout your codebase.
By clicking this button, the model will automatically insert console.log statements at key execution points to help you debug your code.
By clicking this button, the model will analyze your code for common issues (syntax errors, logic bugs, security vulnerabilities) and automatically apply fixes where possible.
By clicking this button, the model will analyze your code structure and provide specific suggestions for improvement (like refactoring opportunities, performance enhancements, or best practices).
### Versioning
On the top right, you'll find a version selector that automatically saves each iteration as you refine your text. Navigate between versions to compare changes and restore any previous version that better fits your needs.
### Copy and Export
When you're ready to use your content, click the copy button (top right) to copy the entire text to your clipboard for pasting directly into your IDE or elsewhere.
Alternatively, click "Export" to download the file in its native format (.py for Python scripts, .html for HTML files, .js for JavaScript, etc.).
### Multiple Canvas in one Chat
You can open multiple Canvas instances in one chat session. Switch between them using the Canvas selector (top right, next to the share button) or scroll to find the specific Canvas in your chat history. Text Canvas display the Canvas icon, while Dev Canvas show a code icon for easy identification.
# Canvas for writing
Source: https://docs.langdock.com/product/chat/canvas-for-writing
Canvas gives you a dedicated editing screen alongside your chat where you can iterate on code or text much more easily. This page focuses specifically on the Canvas writing features for tasks like drafting emails, documents, or any text content.
## How to select Canvas
To use Canvas, open a new chat and select a model that supports Canvas. You'll see Canvas availability indicated by icons in the model selector. If the Canvas icon is greyed out, that model doesn't support Canvas.
Canvas works best with GPT-4.1 at the moment.
Models that support Canvas automatically launch the feature when it is considered suitable. If you want the model to explicitly use the tool, activate it by clicking the Canvas button in the input bar at the bottom of your screen.
Once you activate Canvas, the button will highlight to confirm it's active, and you can begin creating and editing in Canvas.
## How to use Canvas
Canvas automatically opens on the right side after you send your prompt and the AI generates a response. You now have several options to continue:
### In-line editing
You can edit the output directly in the Canvas editor on the right. You can type and modify text manually, or select specific sections to have the model reformat or rewrite just those parts.
### Optimizing the whole text generally
At the bottom right corner you'll find three buttons to optimize your text:
By clicking this button, the entire text in Canvas will be expanded with additional detail and context.
By clicking this button, the entire text in Canvas will be condensed to its key points.
By clicking this button, the AI will analyze your text and provide improvement suggestions.
### Formatting
You can manually format text in Canvas using the formatting toolbar in the top right. Available options include **bold**, *italic*, headings (H1-H3), and lists (bulleted and ordered).
### Editing in the regular chat
You can also give specific instructions in the regular chat if these formatting options don't cover what you need.
### Versioning
On the top right, you'll see a version selector that lets you view previous iterations of your text. You can navigate between versions and restore whichever one works best for your needs.
### Copy and Export
When you're ready to use your content, you'll find a copy button on the top right that copies the entire text to your clipboard for pasting elsewhere. You can also choose from three export formats (PDF, Word, or Markdown) that will download your Canvas content in your preferred file format.
### Multiple Canvas in one Chat
You can open multiple Canvas instances in one chat session. Switch between them using the Canvas selector (top right, next to the share button) or scroll to find the specific Canvas in your chat history. Text Canvas display the Canvas icon, while Dev Canvas show a code icon for easy identification.
# Data Analysis
Source: https://docs.langdock.com/product/chat/data-analysis
To process tabular files like CSVs, Google Sheets, or Excel sheets, you can use the data analysis (Python) tool. Here is how to use it.
The **more context and details** you add, the **better your response** because the model understands precisely what you expect. Do not miss our [Prompt Engineering Guide](/resources/prompt-elements) to learn how to write great prompts.
The data analysis tool in Langdock enables users to (among other things) read and process CSV files, Excel or Google Sheets.
You can use the data analysis tool to:
* Read tabular data (CSVs, Excel sheets, and Google Sheets)
* Perform mathematical operations, e.g., finding correlations or computing distributions and deviations
* Create graphs and charts depicting data
* Generate new files (Excel, CSV, PowerPoint, Word, etc.)
Describe what you're trying to accomplish in the chat. Try to be as specific as possible.
## How it works
1. The data analyst is a tool the model can choose. It gets triggered when you prompt the model to use it ("use the data analyst") or when a matching file type is uploaded (Google Sheets, CSVs, Excel files). Here is an example of a file we will use in Langdock:
To receive the best results, please use **GPT-4o** and ensure that the **column titles are in the first row**.
2. The model then generates Python code. Python is a programming language that can be used to analyze datasets and extract information. In the dark code block at the top you can see the generated Python code to analyze our example file:
3. After the code has been generated, a separate instance runs the Python code and returns the result to the model. It is shown under the code block in the screenshot above.
4. The model uses the prompt and the result to answer the user's question. In our example, this looks like this:
5. If you request a file or a diagram, the model generates code again to generate the file and executes it afterwards. The generated file or diagram is then displayed in the chat and can be downloaded.
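The generated Python typically loads the tabular data and computes the requested values. The sketch below illustrates that pattern with an inline table standing in for an uploaded file; the column names and values are hypothetical, not taken from a real dataset:

```python
import pandas as pd

# Hypothetical stand-in for an uploaded sheet (column titles in the first row)
df = pd.DataFrame({
    "Product": ["A", "B", "C"],
    "Monthly Revenue (EUR)": [1200, 3400, 800],
})

# Typical generated operations: aggregate the data and locate extremes
total_revenue = df["Monthly Revenue (EUR)"].sum()
best_seller = df.loc[df["Monthly Revenue (EUR)"].idxmax(), "Product"]
print(f"Total: {total_revenue} EUR, best seller: {best_seller}")
```

The result of running such code (here, the total and the best-selling product) is what gets returned to the model to formulate its answer.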
## Differences to other documents
The normal document search and the data analyst are different functionalities for different tasks, each with advantages and disadvantages. Document search is good at understanding a document's **whole content**, but it is not good at processing tabular data.
The data analyst **cannot understand the entire file**, only the part that is extracted with Python; everything else in the file is not considered for the response. This, however, makes it powerful for working with large data sets and tabular data, as well as for performing mathematical operations.
## Best practices and troubleshooting
* To parse the file correctly, all column titles should have descriptive names. When referring to a column, use its full title rather than "Column K". This matters because the AI model generates Python code that can only reference a column correctly if the name matches exactly; using the exact column title reduces the risk of the generated code referencing the wrong column.
* Make sure to enable the data analysis functionality in your settings and (if you are using a sheet in an agent) also in the capabilities section at the bottom of the agent editor.
* Try to describe what you expect as precisely as possible. You can use the prompt elements from our prompt engineering guide (especially task, context, response format)
* If possible, avoid empty cells in a sheet.
* When you expect complex operations and receive no result or incorrect results, try to break the instruction into different prompts.
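The column-title advice above exists because the generated Python references columns by name, not by spreadsheet position. A minimal sketch (the column titles are hypothetical):

```python
import pandas as pd

# Hypothetical sheet with descriptive column titles in the first row
df = pd.DataFrame({
    "Order Date": ["2024-01-02", "2024-01-03"],
    "Net Amount": [99.0, 45.5],
})

# Referring to the full column title works reliably
net_total = df["Net Amount"].sum()

# Referring to a spreadsheet position does not: there is no "Column K"
# df["Column K"]  # would raise a KeyError
print(net_total)
```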
# Deep Research
Source: https://docs.langdock.com/product/chat/deep-research
Deep Research creates comprehensive, citable reports by conducting strategic web searches and synthesizing findings from multiple sources.
## What is Deep Research?
Deep Research tackles complex research projects by intelligently planning multiple strategic web searches across different angles, then synthesizing findings into comprehensive reports with proper citations. It's designed for when you need thorough investigation rather than quick answers.
## When to use Deep Research
Deep Research is particularly powerful for:
* **Background research** - Comprehensive overviews of topics, companies, or industries
* **Market analysis** - Understanding market trends, sizing, and competitive landscapes
* **Competitive analysis** - In-depth competitor research and positioning
* **Academic research** - Literature reviews and multi-source academic investigations
* **Strategic planning** - Research to inform business decisions and strategy
* **Industry trends** - Understanding emerging trends and their implications
Use Deep Research when you need thorough, well-documented analysis rather than quick facts or casual conversation. The resulting report with citations can be downloaded as a PDF, saving you hours of manual research and compilation.
## How Deep Research works
1. **Intelligent planning** - Deep Research analyzes your query and creates a strategic research plan
2. **Multi-source searching** - It conducts multiple web searches from different angles to gather comprehensive information
3. **Real-time visibility** - You can watch the search activity in real-time and see sources as they are added
4. **Synthesis and analysis** - All findings are analyzed and synthesized into a structured report
5. **Citation and documentation** - Every claim is properly cited with source links for verification
No matter which model you select in the chat, Deep Research always uses pre-configured models (specifically optimized for research tasks) to ensure the best possible quality and consistency.
## Usage limits
Deep Research has a usage limit of **15 searches** **per user per month** to ensure optimal performance for all users. This limit resets monthly and applies across all workspaces.
## Getting started
Deep Research is now available across all workspaces. To use it:
### 1. Select "Deep Research" from the tools in the chat
### 2. Enter your research query
Be specific about what you need (e.g., "Compare pricing models for SaaS platforms" vs. "Tell me about SaaS")
### 3. Answer the Follow up questions
### 4. Watch as Deep Research conducts its investigation in real-time
And see what sources are being read at which moment.
### 5. Review the comprehensive report with citations
View the Activity of the Deep Research.
And inspect all sources of the report.
### 6. Download as PDF if needed for sharing or offline reference
### Deep Research vs. Regular Chat
| | **Deep Research** | Regular Chat |
| --------- | --------------------------- | --------------- |
| Speed | 5-30 minutes | Instant |
| Sources | Multiple strategic searches | Limited |
| Output | Structured reports | Conversational |
| Citations | Comprehensive | Basic |
| Best for | In-depth analysis | Quick questions |
Deep Research transforms how you approach complex research tasks, providing the depth and rigor of manual research with the efficiency of AI automation.
# Document Search
Source: https://docs.langdock.com/product/chat/document-search
To access specific knowledge in files, you can use the document search tool. Simply upload or connect a file and the model will take the document into account.
The **more context and details** you add, the **better your response** because the model understands precisely what you expect. Do not miss our [Prompt Engineering Guide](/resources/prompt-elements) to learn how to write great prompts.
Document search is one tool the AI models have. You can add documents by uploading them, dragging and dropping them, or selecting a file from an integration (how to set up integrations is explained [here](/resources/integrations/using-integrations)).
When you attach a document to a chat, document search automatically kicks in. Document search extracts text from files, such as PDFs, Word documents, or PowerPoint presentations. You will find a list of all supported file types [here](/resources/faq/supported-file-types).
The document's text gets sent to the AI model along with your prompt, allowing it to answer questions based on the actual content.
**Use cases for working with documents are:**
* Summarizing texts
* Asking questions about a document
* Analyzing the files
**Current limitations of document search:**
* Table data extraction isn't reliable yet. For tables, try our [Data Analyst](/product/chat/data-analysis) with CSV/Excel files, Google Sheets integration, or screenshot the table and work with the image instead.
* Images, graphs, and non-text symbols in the documents can't be processed
# Basic Chat functionalities
Source: https://docs.langdock.com/product/chat/functionalities
To select a model and interact with the messages and responses in the chat, there are a few functionalities in the chat. These are the model selector in the top left corner and the buttons to interact with prompts and responses.
## Model Selector
Langdock is model-agnostic, integrating the best AI models in a GDPR-compliant EU setting, regardless of provider.
* **GPT-5 family** - GPT-5, GPT-5 mini, GPT-5 nano & GPT-5 Chat
* **GPT-4.1** - OpenAI's flagship large model
* **Claude Sonnet 4** - Anthropic's reasoning-focused model
* **Gemini 2.5 Pro** - Google's multimodal model
See our complete [model guide](/resources/models) for the full list of available models and their specific capabilities.
To choose a model for your next response:
1. Click the model name in the top-left corner of the chat window
2. Pick the model that best fits your task - your next message will use it automatically
You can learn more about the current models in Langdock in our [model guide](/resources/models).
## Prompt functionalities
Hover over your prompt to see available actions.
### Edit prompt
If you're unsatisfied with a response, edit the prompt by clicking the pen icon. Click save when ready. It's normal to iterate on your instructions when using AI models and add more context where needed.
### Save prompt
To reuse a prompt, save it by clicking the "+" icon. Choose a folder in your prompt library and give your prompt a name. Learn more about the prompt library [here](/product/chat/prompt-library).
## Response functionalities
If you move the cursor of your mouse over the response, you can see the functionalities connected to the response.
### Copy response
Click the copy button to save the response to your clipboard, then paste it into other tools and applications.
### Regenerate response
If you're unsatisfied with a response, click the circular arrow to regenerate it. This works because AI models are non-deterministic, meaning you'll get slightly different results for the same prompt. You can compare multiple responses and pick the best one. If responses are still insufficient, try editing your prompt with more specific details.
# Image Analysis (Vision)
Source: https://docs.langdock.com/product/chat/image-analysis
A few models are capable of processing images and taking them into account for their answer generation. This works because these models have multimodal capabilities, meaning they can understand both text and visual content simultaneously. You can use this to extract text from documents, describe what's in images, or analyze visual data.
The **more context and details** you add, the **better your response** because the model understands precisely what you expect. Do not miss our [Prompt Engineering Guide](/resources/prompt-elements) to learn how to write great prompts.
Apart from uploading text files, you can also upload images (JPG, PNG) to the chat and let the model analyze them. This capability is called “vision”. The following models support it:
* GPT-5
* GPT-5 mini
* GPT-5 nano
* GPT-5 Chat
* GPT-4.1
* GPT-4.1 mini
* GPT-4.1 nano
* GPT-4o
* GPT-4o mini
* o1
* o3
* o4 mini
* Claude Sonnet 4.5
* Claude Sonnet 4
* Claude 3.7 Sonnet
* Claude 3.5 Sonnet
* Gemini 2.5 Pro
* Gemini 2.5 Flash
Image analysis is limited to images uploaded in the chat and not available in uploaded PDFs or presentations yet.
# Image Generation
Source: https://docs.langdock.com/product/chat/image-generation
To generate images based on your text input, you can use the image generation tool. Here, the model you selected sends a prompt to an image generation model from our providers, which are specifically built for image generation.
The **more context and details** you add, the **better your response** because the model understands precisely what you expect. Do not miss our [Prompt Engineering Guide](/resources/prompt-elements) to learn how to write great prompts.
The image models currently available in Langdock include Flux1.1 Pro Ultra and Flux.1 Kontext from our partner Black Forest Labs. Additionally, you can access Imagen 4, Imagen 4 Fast, and Gemini 2.5 Flash Image (Nano Banana) from Google, as well as DALL-E 3 and GPT Image 1 from OpenAI.
Image generation uses the following steps:
1. You can use image generation in Langdock via the "Image" button in the chat field. This uses the default image model; you can also select a different model via the selector inside the button.
2. The chat model then chooses the image generation tool and writes a prompt to the image model in the background.
3. The image model generates the image based on the prompt and returns it to the main model and you as the user.
You can select any language model for image generation. Each model sends prompts to the underlying image generation model differently, so feel free to try different models and see how the generated images differ.
Here's a known limitation we're working on:
* **Text in images has mistakes / is written in non-existing letters:**
This happens because the models are trained on real images that included text. The model generates objects that look similar to what it learned, but it can't reliably write full, correct sentences yet. Instead, it tries to mimic letters from the alphabet, leading to incorrect spelling or nonexistent letters. This is a current limitation of image generation models that providers are actively improving in upcoming versions.
# Memory
Source: https://docs.langdock.com/product/chat/memory
Memory offers deeper personal customization of the model's behavior, by allowing them to remember information from past interactions.
## What is Memory?
Memory is a way to store information in the model's context. This information can be used to improve the model's responses to questions or to perform tasks in the future. Additionally, it allows you to carry over context from one conversation to the next.
Some examples of how you can use memory:
* Remember certain details about your job
* Share a preference for a specific style of writing
* Remember your name and other personal details
All your memories are stored in your account, and are available to you in all your chats. However, memories are not available when using Agents.
### Usage
To use memory, you need to enable it in your settings. Go to your account settings, then the "Preferences" tab. There you can enable chat memory in the capabilities section.
### Inspect and edit memories
You can view and manage all your memories by going to the "Memory" tab in your account settings. There you'll also find an edit option if you want to adjust the wording without having to go through the memory creation process again.
### Limitations
You can store a maximum of 50 memories at a time. If you want to store more, you can delete some of the older memories by visiting the "Memory" tab in your account settings.
# Mermaid Diagrams
Source: https://docs.langdock.com/product/chat/mermaid
Create interactive flowcharts, process diagrams, and visualizations with AI-generated Mermaid code in Langdock
# Generating Mermaid Diagrams
Transform your ideas into clear, interactive diagrams using Langdock's AI-powered Mermaid generation. Whether you need flowcharts, process diagrams, or system architectures, our models create precise Mermaid code that renders instantly in an interactive frame.
## How Mermaid Generation Works
When you request a diagram, Langdock's AI models analyze your requirements and generate Mermaid syntax that renders immediately in your chat. The process is conversational and iterative, allowing you to refine your diagram until it perfectly matches your vision.
Describe the diagram you want to create. Be specific about the type (flowchart, sequence diagram, etc.) and include key elements you want to visualize.
Create a flowchart showing the user authentication process
The more specific your request, the better the initial result. Include details like decision points, process steps, and relationships between elements.
The model creates Mermaid syntax based on your description. If your request is unclear, the model may ask follow-up questions to ensure accuracy.
Different models may interpret your request slightly differently, so feel free to try various models if the first result doesn't match your expectations.
Once the Mermaid code is generated, a new frame opens in your chat displaying the rendered diagram with full interactivity.
You'll see your diagram rendered immediately with zoom controls and navigation options available.
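For the authentication-flowchart request mentioned above, the generated code might look along these lines (an illustrative sketch; the node labels and branching are assumptions, not the model's actual output):

```mermaid
flowchart TD
    A[User submits credentials] --> B{Credentials valid?}
    B -- Yes --> C[Create session]
    B -- No --> D[Show error message]
    D --> A
```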
## Interacting with Your Diagram
The diagram frame provides several interaction options to help you examine and work with your visualization.
### Navigation Controls
#### Top-left corner icons:
* **Zoom In** : Magnify specific parts of your diagram for detailed examination
* **Zoom Out** : Get a broader view of complex diagrams
* **Reset View** : Return to the original zoom level and position
Use zoom controls when working with large, complex diagrams to focus on specific sections without losing context.
#### Navigate within the frame:
* **Click and drag**: Move around large diagrams to examine different sections
* **Responsive interaction**: The diagram responds immediately to your movements
### Export and Sharing Options
The top-right corner of the frame contains three export options:
Download a PNG file of your diagram, perfect for presentations, documentation, or sharing with stakeholders.
Copy the raw Mermaid syntax to your clipboard for use in other tools, documentation systems, or version control.
Save the diagram as a **.mermaid** file for editing in specialized tools or integration with development workflows.
## Refining Your Diagram
If your diagram doesn't match your vision exactly, you can easily request modifications through natural conversation.
* **Color changes**: "Make the decision nodes blue and the process steps green"
* **Text adjustments**: "Change 'User Login' to 'Authentication Process'"
* **Structure modifications**: "Add a step for password validation before the success path"
* **Style updates**: "Use rounded rectangles instead of sharp corners"
* **Layout improvements**: "Arrange the nodes vertically instead of horizontally"
Tell the model exactly what you want to modify. Be specific about colors, text, structure, or layout changes.
Change the color of the error handling boxes to red and add a retry loop
The AI processes your feedback and creates updated Mermaid code incorporating your requested changes.
A fresh diagram frame appears with your modifications, ready for further interaction or export.
The updated diagram maintains all previous elements while incorporating your specific changes.
Complex diagrams with many elements may take a moment to render. The interactive frame will appear once the diagram is fully processed.
# Projects
Source: https://docs.langdock.com/product/chat/projects
Group related chats with shared files and custom instructions for better workflow organization
# Projects
Projects provide a flexible way to organize your chats around specific contexts, making it easier to work on larger initiatives while keeping all related conversations together.
## What are Projects?
Projects are containers that group related chats in a specific context. Whether you're working on a marketing campaign, product launch, or research initiative, Projects help you:
* **Group related chats** - Keep all conversations about a specific topic or initiative together
* **Share files across chats** - Attach documents once and access them in all project chats without re-uploading
* **Customize AI behavior** - Set project-specific instructions that apply to all chats within the project
* **Maintain context** - Work on larger projects without losing track of relevant information
## How Projects Work
Projects enhance your workflow through shared context:
1. **Create a project** for your specific purpose (e.g., "Q4 Marketing Campaign" or "Product Research")
2. **Upload relevant documents** that will be accessible across all project chats
3. **Set custom instructions** to guide the AI's behavior for this specific project
4. **Start chatting** with full access to the project context and files
## Creating a Project
### From the sidebar
1. **If you have existing projects**: Hover over the "Projects" header in the sidebar to reveal a **+** button, then click it
2. **If you have no projects yet**: Click the "New project" item that appears in the sidebar
### Using keyboard shortcuts
Press `⌘/Ctrl + K` to open the command palette and type "new project"
## Setting Up Your Project
When creating a project, you can configure:
### Project basics
* **Name**: Choose a descriptive name that identifies the project's purpose
* **Description** (optional): Add context about the project's goals or scope
### Attached files
Upload documents, spreadsheets, presentations, or other files that are relevant to your project. These files will be automatically available in all chats within the project - no need to upload them repeatedly.
### Custom instructions
Define project-specific instructions that customize how the AI responds within this project's chats. For example:
* "Always use formal business language for this client project"
* "Focus on technical accuracy and include code examples"
* "Summarize responses in bullet points for easy scanning"
## Working with Projects
### Starting a chat in a project
Once your project is set up, any new chat you create within it will automatically:
* Have access to all project files
* Apply your custom instructions
* Be grouped with other project chats for easy navigation
### Managing project content
* All chats within a project share the same context
* You can update files or instructions at any time
* Changes to project settings apply to all future chats
## Sharing Projects
Share entire projects with your team to collaborate effectively. When you share a project, team members get access to all chats, files, and custom instructions within that project.
Sharing a project will also share all chats inside it. New chats added to the project will automatically be shared with the same people or groups.
### Sharing a project
1. Open your project
2. Click the **Share** button in the top right corner
3. Search for users or groups within your workspace
4. Select the permission level for each collaborator
5. Optionally enable email notifications with a custom message
### Who you can share with
Projects can be shared with:
* **Individual users** within your workspace
* **Groups** within your workspace (you must be an editor or admin of the group to share with it)
Projects cannot be shared with the entire workspace or with API keys.
### Viewing shared projects
When someone shares a project with you, it automatically appears in your sidebar under the "Projects" section alongside your own projects. Additionally, each chat card in the project overview shows the chat owner's name and profile picture, so you can easily see who created each chat.
### Permission levels
Projects have three permission levels:
| Permission | Owner | Editor | User |
| -------------------------------------- | ----- | ------ | ---- |
| View project settings and instructions | ✓ | ✓ | ✓ |
| Read all chats in the project | ✓ | ✓ | ✓ |
| View attached files | ✓ | ✓ | ✓ |
| Add chats to the project | ✓ | ✓ | ✓ |
| Remove own chats from the project | ✓ | ✓ | ✓ |
| Manage files and instructions | ✓ | ✓ | ✗ |
| Share project with others | ✓ | ✓ | ✗ |
| Delete project | ✓ | ✗ | ✗ |
* **Owner**: The person who created the project. Has full control including the ability to delete the project.
* **Editor**: Can chat and edit knowledge and instructions.
* **User**: Can chat and see knowledge and instructions.
When sharing a project, you can assign either **Editor** or **User** permissions to collaborators.
### Chat ownership
Regardless of project permissions, you always retain full control over your own chats:
* Only you can edit, rename, or delete chats you created
* Your chats remain yours even if the project is deleted
* All chats added to a shared project are visible to all project members
### Filtering shared content
To help you manage shared projects, you can filter chats within a project:
* **All** - Show all chats
* **By you** - Show only chats you created
* **By others** - Show only chats created by team members
## Best Practices
* **One project per initiative** - Create separate projects for distinct workflows or clients
* **Keep files updated** - Regularly review and update project files to maintain relevance
* **Use specific instructions** - Tailor AI behavior to match the project's communication style and requirements
* **Archive completed projects** - Keep your workspace organized by archiving finished projects
## Use Cases
Projects are particularly useful for:
* **Marketing campaigns** - Group all campaign-related chats with brand guidelines and assets
* **Research projects** - Keep research documents and discussions organized together
* **Client work** - Maintain separate contexts for different clients with their specific requirements
* **Product development** - Organize feature discussions with relevant specifications and documentation
# Prompt Library
Source: https://docs.langdock.com/product/chat/prompt-library
If you've developed a good prompt and want to reuse it, the prompt library lets you store, edit, and share it with others. You can use saved prompts in any chat or with agents. All prompts are organized in folders that you can keep private or share with your workspace.
## Adding a prompt to the prompt library
To reuse a prompt you've written, hover over it in any chat and click the **+** icon to save it. You'll need to choose a folder and give it a name.
Alternatively, go directly to the library and click **"Add Prompts"** in the upper right corner. Here you can enter the prompt's name, the actual prompt text, and specify where to save it.
You can add variables to make prompts more flexible using `{{ }}` with your variable name between the brackets. This creates placeholders that you (or others) can customize for specific situations. You can also click the button in the bottom left corner to add variables.
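For example, a saved prompt with variables might look like this (the variable names are just illustrations):

```text
Write a {{tone}} follow-up email to {{recipient_name}} summarizing
our discussion about {{topic}}.
```

When the prompt is used, each placeholder is replaced with the value you fill in.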
## Using a prompt in a chat
To use prompts from the library, click on any prompt to open a new chat with it already loaded in the input field. You can then modify it or fill in any placeholders.
Another way is to type `@` in any chat. This opens a list of all your available prompts. Click on one you want to use, or start typing to filter the list.
When using a prompt with variables, you'll be prompted to fill in the specific values for your situation.
Advanced tip: You can combine agents with prompts from the library. If you have an agent that follows a series of steps, save those steps as prompts in the library and execute them one by one in the agent chat.
# Web Search
Source: https://docs.langdock.com/product/chat/web-search
AI models have knowledge cutoffs because they can't learn new information after training. To access current information or web results, you can use the Web Search tool.
The **more context and details** you add, the **better your response** because the model understands precisely what you expect. Do not miss our [Prompt Engineering Guide](/resources/prompt-elements) to learn how to write great prompts.
Web search solves a core technical limitation of AI models. Large Language Models go through two phases: training (when they're "built") and then deployment (when you use them). Once training is complete, the model's knowledge is frozen at that cutoff date and can't be updated. This means even the newest models become outdated the moment they're released.
The web search tool bridges this gap in two steps:
1. **Search**: A query is generated and searches the internet for relevant results
2. **Context**: Those results get sent to the AI along with your prompt to generate an informed answer
**Perfect for:**
* Gathering current information on any topic
* Getting real-time data and recent developments
* Searching specific websites (just include the URL in your prompt)
Want to understand more about how AI training works? Check out our [guide about how AI works](/resources/basics).
## Selecting web search
To use Web search, open a new chat and select a model that supports web search. You’ll see web search availability indicated by the icon in the model selector; if the web search icon is greyed out, that model doesn’t support web search.
When using models that support web search, activate it by clicking the web search button in the input bar at the bottom of your screen.
Once you activate web search, the button will highlight to confirm it’s active, and you can begin searching the internet or accessing websites using web search.
## Using web search
Web search automatically triggers when the AI detects that your prompt requires information beyond its training cutoff date.
Your prompt then gets reformatted into an optimized search query, and our search model finds relevant results across the web. Once the search is complete, the model analyzes findings from multiple websites and synthesizes them into a comprehensive answer for you.
The whole process happens seamlessly in the background, so you get current information without any extra steps on your end.
### Inspecting sources
When you want to see what websites the AI used to write the response, click on *Searched for "your search query"* to view the complete list of websites that were analyzed.
On this view, you can see the citations and search results from your web search.
When you hover over a citation, it highlights exactly which paragraph quoted that specific website. Below the main response, you'll find all the other search results that were analyzed during the search but didn't make it into the final answer.
This gives you full transparency into both what sources were used and what additional information was considered but not included.
# Agent-to-Agent Protocol (A2A)
Source: https://docs.langdock.com/product/integrations/a2a-protocol
A2A is an open protocol enabling AI agents to communicate and collaborate across platforms. Learn how to build and connect A2A-compatible agents.
## What is A2A?
A2A (Agent-to-Agent Protocol) is an open protocol that enables AI agents to discover and communicate with each other across different platforms and vendors. Originally launched by Google and now maintained by the Linux Foundation, A2A provides a standardized way for agents to collaborate on complex tasks.
**A2A vs MCP — What's the difference?**
* **MCP** connects agents to **tools** (databases, APIs, services)
* **A2A** connects agents to **other agents** (delegation, collaboration)
Use both together: MCP gives your agent capabilities, A2A lets it collaborate with specialized agents.
## Current Implementation
| Feature | Status |
| ------------------ | ------------------------------------------------------------ |
| **Streaming** | A2A waits for the complete response before returning results |
| **Authentication** | None or API key based |
| **Discovery** | Via `agent-card.json` |
## Core Concepts
### AgentCards
Every A2A agent exposes an **AgentCard** — a JSON file at `/.well-known/agent-card.json` that describes the agent's capabilities. This enables automatic discovery.
```json theme={null}
{
"name": "Research Assistant",
"description": "Searches and summarizes academic papers",
"url": "https://research-agent.example.com",
"version": "1.0.0",
"skills": [
{
"id": "paper-search",
"name": "Paper Search",
"description": "Search academic databases for relevant papers"
},
{
"id": "summarize",
"name": "Summarize Paper",
"description": "Generate a concise summary of a research paper"
}
]
}
```
### Communication Flow
1. **Discovery** — Client fetches `/.well-known/agent-card.json` to learn agent capabilities
2. **Task Creation** — Client sends a task request with input data
3. **Processing** — Agent processes the task and generates a complete response
4. **Response** — Agent returns the full result (no streaming)
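As an illustration of the discovery step, a client can derive the card URL from the agent's base URL and read the advertised skills. This is a simplified sketch, not an official client implementation; the function names are illustrative:

```python
import json

# Sample AgentCard matching the example above
SAMPLE_CARD = json.loads("""
{
  "name": "Research Assistant",
  "url": "https://research-agent.example.com",
  "version": "1.0.0",
  "skills": [
    {"id": "paper-search", "name": "Paper Search"},
    {"id": "summarize", "name": "Summarize Paper"}
  ]
}
""")

def agent_card_url(base_url: str) -> str:
    # AgentCards are always served from this well-known path
    return base_url.rstrip("/") + "/.well-known/agent-card.json"

def skill_ids(card: dict) -> list:
    # Skills advertise ids a client can reference when creating tasks
    return [skill["id"] for skill in card.get("skills", [])]
```

A client would fetch `agent_card_url(...)` over HTTPS, parse the JSON, and pick a skill id to include in its task request.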
**Complete A2A Agent Example:** The [Langdock-A2A-Demo Repository](https://github.com/matsjfunke/Langdock-A2A-Demo) contains a TypeScript-based A2A agent (protocol v0.3.0) with Express server and Langdock API integration.
## When to Use A2A
| Use Case | A2A | MCP |
| ---------------------------------------- | --- | --- |
| Query a database | | ✓ |
| Call an API | | ✓ |
| Delegate research to a specialized agent | ✓ | |
| Coordinate multiple agents on a task | ✓ | |
| Connect to external tools/services | | ✓ |
| Agent-to-agent collaboration | ✓ | |
## Resources
* Official documentation and specification
* Protocol specification and reference implementations
***
## Related Documentation
* [MCP Integration Guide](/resources/integrations/mcp) — Connect agents to tools
* [MCP Server Directory](/product/integrations/mcp-directory) — Verified MCP servers
* [Agent Configuration](/product/agents/configuration) — Configure your Langdock agents
# Connections
Source: https://docs.langdock.com/product/integrations/connections
Learn how connections work in Langdock, including authentication types, ownership, and sharing options for agents and workflows.
## What Are Connections?
A **connection** is a saved authentication link between Langdock and an external tool. When you connect your Google Calendar, HubSpot, or any other integration, you're creating a connection that stores the credentials needed to interact with that service.
Each connection belongs to the user who created it. This ensures your credentials remain secure and actions are performed with your access rights.
## Authentication Types
Langdock supports different authentication methods depending on the integration:
| Auth Type | How It Works | Example Integrations |
| ------------------- | ------------------------------------------------------------------- | ------------------------------------------- |
| **OAuth** | You sign in directly with the service and grant Langdock permission | Google Suite, Microsoft 365, Slack, HubSpot |
| **API Key** | You provide an API key from the service | OpenAI, Stripe, custom integrations |
| **Service Account** | An admin sets up a service-level account | Some enterprise tools |
| **None** | No authentication needed | Public APIs |
### OAuth Authentication
Most popular integrations use OAuth, the industry-standard protocol for secure authorization. When you connect via OAuth:
1. **You're redirected** to the service's login page (e.g., Google, Microsoft)
2. **You sign in** with your own account
3. **You grant permissions** for Langdock to access specific data
4. **Tokens are stored** securely and refreshed automatically
OAuth connections always act with your permissions. If you can't access a calendar in Google, the agent using your connection can't access it either.
### API Key & Service Account Authentication
These authentication types require you to manually provide credentials:
* **API Key**: Copy an API key from the service's settings and paste it into Langdock
* **Service Account**: Provide service account credentials (often a JSON file or key pair)
These are typically used for services that don't support OAuth or for admin-level integrations that need broader access.
***
## Connection Ownership
By default, **connections are user-based and not shareable**. This means:
* ✅ You can only see and use connections you created
* ✅ Actions performed use your access rights and permissions
* ✅ Your credentials are never exposed to other users
* ❌ Other users cannot directly use your OAuth connections
This design ensures security and compliance—your authorization remains yours.
### Why User-Based?
When you authorize Langdock to access your Google Calendar, you're granting permission for actions to be performed as you. Sharing that connection with others would mean they could act on your behalf, which violates the trust relationship established during OAuth consent.
***
## Sharing Connections via Agents
While connections are personal by default, there's a powerful way to share their capabilities: **attaching connections to agents**.
### How It Works
When you create an agent and add actions:
1. **Add an action** to your agent (e.g., "Create Calendar Event")
2. **Select a connection** to use with that action
3. **Share the agent** with others
Now anyone using that agent can trigger the action using your pre-configured connection—without ever seeing your credentials.
**Example:** You create a "Team Calendar Agent" with a "Create Event" action linked to your Google Calendar connection. When a colleague uses this agent to schedule a meeting, the event is created in your calendar using your credentials—but they never see your OAuth tokens.
### Use Cases
* **Team agents**: Create a shared agent that can add events to a team calendar using your connection
* **Workflow automation**: Set up workflows where actions run with your credentials
* **Departmental tools**: Build agents that interact with your CRM or project management tools
When sharing agents with pre-configured connections, the actions will be performed using your account. Only attach connections you're comfortable having others trigger.
***
## Sharing Non-OAuth Connections Directly
Connections that use **API Key**, **Service Account**, or **No Authentication** can be shared directly with other users, groups, or your entire workspace.
### Shareable Connection Types
| Auth Type | Directly Shareable? |
| --------------- | :-----------------: |
| OAuth | ❌ No |
| API Key | ✅ Yes |
| Service Account | ✅ Yes |
| None | ✅ Yes |
### Why OAuth Can't Be Shared Directly
OAuth tokens represent a specific user's authorization and consent. Sharing them would:
* Violate the user's agreement with the service provider
* Create security risks if tokens are leaked
* Make it impossible to track who performed which action
### How to Share Non-OAuth Connections
If you have an API key connection (or another shareable type), workspace admins and connection owners can share it:
1. Go to **Integrations** in your settings
2. Find the connection you want to share
3. Click **Share** and select recipients:
* **Specific users**: Share with individual team members
* **Groups**: Share with entire teams or departments
* **Workspace**: Make available to everyone
This is great for shared API keys like translation services, analytics tools, or internal APIs that the whole team should be able to use.
### Who Can Share Connections?
| Role | Can Share |
| ---------------- | ------------------------------------------- |
| Connection owner | ✅ Their own non-OAuth connections |
| Workspace admin | ✅ Any non-OAuth connection in the workspace |
| Regular user | ❌ Only through agents |
***
## Choosing the Right Sharing Method
| Scenario | Recommended Approach |
| -------------------------------------- | -------------------------------------------- |
| Team needs access to your calendar/CRM | Create an agent with pre-configured actions |
| Shared API key for a service | Share the connection directly (if non-OAuth) |
| Personal workflow automation | Use your own connection in your agent |
| Departmental tool access | Share via agent with specific groups |
***
## Summary
| Feature | OAuth | API Key / Service Account |
| ----------------------- | ----- | ------------------------------- |
| User-owned | ✅ | ✅ |
| Direct sharing | ❌ | ✅ (to users, groups, workspace) |
| Share via agent | ✅ | ✅ |
| Automatic token refresh | ✅ | N/A |
| Admin can share | ❌ | ✅ |
Understanding how connections work helps you build secure, collaborative workflows. Use agent-based sharing for OAuth connections, and direct sharing for API keys and service accounts when appropriate.
# Introduction
Source: https://docs.langdock.com/product/integrations/introduction
Langdock focuses on company use cases and integrates with your existing tools to fully leverage AI. We do this in two ways: through native integrations and by offering an easy way to build custom integrations for your specific tools.
## Native integrations
We want to make it easy to connect your tools with Langdock and leverage AI within your existing processes and workflows. That's why we built integrations to the tools our customers already use, including knowledge bases, major CRMs, Google and Microsoft Suites, and vector databases.
These integrations handle authentication, data syncing, and API connections automatically, so you can focus on using AI rather than managing technical setup.
You can find more information on this in [this guide](/resources/integrations/using-integrations).
## Build your own integrations
If we don't have a native integration for your tool yet, or you want to connect an internal system, you can build your own integrations. Our integration builder includes a visual interface for common workflows plus custom JavaScript support for complex edge cases.
This means you can connect virtually any tool that has an API, from legacy systems to modern SaaS platforms.
You can find more information in the [guide on how to create integrations](/resources/integrations/create-integrations).
# MCP Server Directory
Source: https://docs.langdock.com/product/integrations/mcp-directory
A curated directory of verified MCP servers you can connect to Langdock. All servers listed here are officially maintained by their respective companies.
**What is this directory?**
This is a curated list of **official, remotely-hosted MCP servers** that you can connect to Langdock. All servers here are maintained by the companies that created them, ensuring reliability, security, and ongoing support.
## Trust & Verification
* Maintained directly by the company that owns the product
* No local installation required - connect via URL
* Uses secure HTTP/SSE transport protocols
***
## All MCP Servers
**30 official MCP servers available.** Click any server to see connection details.
### Amplitude
Behavior analytics and experimentation platform for product data insights
**Server URL**
```
https://mcp.amplitude.com/mcp
```
* **Auth Type:** OAuth
* **Transport:** Remote/HTTP
* **Docs:** [amplitude.com/docs/amplitude-ai/amplitude-mcp ↗](https://amplitude.com/docs/amplitude-ai/amplitude-mcp)
### Apify
Extract data from any website with thousands of scrapers, crawlers, and automations
**Server URL**
```
https://mcp.apify.com
```
* **Auth Type:** API Key
* **Transport:** Remote/HTTP
* **Docs:** [docs.apify.com/platform/integrations/mcp ↗](https://docs.apify.com/platform/integrations/mcp)
### Astro
Access to the official Astro documentation
**Server URL**
```
https://mcp.docs.astro.build/mcp
```
* **Auth Type:** None
* **Transport:** Remote/HTTP
* **Docs:** [docs.astro.build/en/guides/build-with-ai ↗](https://docs.astro.build/en/guides/build-with-ai/#astro-docs-mcp-server)
### Asana
Access to Asana via the official MCP server
**Server URL**
```
https://mcp.asana.com/mcp
```
* **Auth Type:** OAuth
* **Transport:** Remote/HTTP
* **Docs:** [developers.asana.com ↗](https://developers.asana.com/docs/using-asanas-mcp-server)
### Atlassian Rovo
Search, create, and manage content across Jira, Confluence, and Compass using natural language
**Server URL**
```
https://mcp.atlassian.com/v1/sse
```
* **Auth Type:** OAuth
* **Transport:** Remote/SSE
* **Docs:** [support.atlassian.com ↗](https://support.atlassian.com/atlassian-rovo-mcp-server/docs/getting-started-with-the-atlassian-remote-mcp-server/)
### Braintrust
Access to the documentation, experiments, and logs in Braintrust
**Server URL**
```
https://api.braintrust.dev/mcp
```
* **Auth Type:** API Key
* **Transport:** Remote/HTTP
* **Docs:** [braintrust.dev/docs/deploy/mcp ↗](https://www.braintrust.dev/docs/deploy/mcp)
### Browser Use
Provides agents access to browser-use documentation
**Server URL**
```
https://api.browser-use.com/mcp
```
* **Auth Type:** API Key
* **Transport:** Remote/HTTP
* **Docs:** [docs.browser-use.com/customize/integrations/mcp-server ↗](https://docs.browser-use.com/customize/integrations/mcp-server)
### ClickUp
Project management and collaboration for teams & agents
**Server URL**
```
https://mcp.clickup.com/mcp
```
* **Auth Type:** OAuth
* **Transport:** Remote/HTTP
* **Docs:** [developer.clickup.com ↗](https://developer.clickup.com/docs/connect-an-ai-assistant-to-clickups-mcp-server)
### Context7
Up-to-date code documentation
**Server URL**
```
https://mcp.context7.com/mcp
```
* **Auth Type:** API Key
* **Transport:** Remote/HTTP
* **Docs:** [context7.com/docs/resources/all-clients ↗](https://context7.com/docs/resources/all-clients)
### DeepWiki
Automatically generates architecture diagrams, documentation, and links to source code to help you understand unfamiliar codebases quickly
**Server URL**
```
https://mcp.deepwiki.com/mcp
```
* **Auth Type:** None
* **Transport:** Remote/HTTP
* **Docs:** [docs.devin.ai/work-with-devin/deepwiki-mcp ↗](https://docs.devin.ai/work-with-devin/deepwiki-mcp)
### GitHub
Repository management, issues, PRs, and code analysis
**Server URL**
```
https://api.githubcopilot.com/mcp/
```
* **Auth Type:** Bearer Token (PAT)
* **Transport:** Remote/HTTP
* **Docs:** [docs.github.com ↗](https://docs.github.com/en/copilot/how-tos/provide-context/use-mcp/use-the-github-mcp-server)
### Honeycomb
Query observability data and SLOs
**Server URL**
```
https://mcp.honeycomb.io/mcp
```
* **Auth Type:** API Key
* **Transport:** Remote/HTTP
* **Docs:** [docs.honeycomb.io/integrations/mcp ↗](https://docs.honeycomb.io/integrations/mcp/#what-is-honeycomb-mcp)
### Hugging Face
Access the Hugging Face Hub and Gradio MCP Servers
**Server URL**
```
https://hf.co/mcp
```
* **Auth Type:** None / Optional
* **Transport:** Remote/HTTP
* **Docs:** [huggingface.co/docs/hub/en/hf-mcp-server ↗](https://huggingface.co/docs/hub/en/hf-mcp-server)
### InstantDB
Query and manage InstantDB
**Server URL**
```
https://mcp.instantdb.com/mcp
```
* **Auth Type:** OAuth
* **Transport:** Remote/HTTP
* **Docs:** [instantdb.com/docs/using-llms ↗](https://www.instantdb.com/docs/using-llms#instant-mcp-server)
### Linear
Issue tracking and project management for development teams
**Server URL**
```
https://mcp.linear.app/sse
```
* **Auth Type:** OAuth
* **Transport:** Remote/SSE
* **Docs:** [linear.app/docs/mcp ↗](https://linear.app/docs/mcp)
### Microsoft Learn Docs
Search Microsoft documentation
**Server URL**
```
https://learn.microsoft.com/api/mcp
```
* **Auth Type:** None
* **Transport:** Remote/HTTP
* **Docs:** [learn.microsoft.com/en-us/training/support/mcp ↗](https://learn.microsoft.com/en-us/training/support/mcp)
### Notion
All-in-one workspace for notes, docs, and project management
**Server URL**
```
https://mcp.notion.com/mcp
```
* **Auth Type:** OAuth
* **Transport:** Remote/HTTP
* **Docs:** [developers.notion.com/docs/mcp ↗](https://developers.notion.com/docs/mcp)
### Pipedream
Connect to APIs and workflows
**Server URL**
```
https://mcp.pipedream.net
```
* **Auth Type:** Custom
* **Transport:** Remote/HTTP
* **Docs:** [pipedream.com/docs/connect/mcp ↗](https://pipedream.com/docs/connect/mcp/)
### PostHog
Analytics, error tracking, and feature flags
**Server URL**
```
https://mcp.posthog.com/sse
```
* **Auth Type:** SSE with Bearer Token (API Key)
* **Transport:** Remote/SSE
* **Docs:** [posthog.com/docs/model-context-protocol ↗](https://posthog.com/docs/model-context-protocol)
### Postman
API collaboration and testing
**Server URL**
```
https://mcp.postman.com/minimal
```
* **Auth Type:** Bearer Token (API Key)
* **Transport:** Remote/HTTP
* **Docs:** [github.com/postmanlabs/postman-mcp-server ↗](https://github.com/postmanlabs/postman-mcp-server/blob/main/README.md#remote-server)
### Prisma
Manage Prisma Postgres databases, including creating new instances and running schema migrations
**Server URL**
```
https://mcp.prisma.io/mcp
```
* **Auth Type:** OAuth
* **Transport:** Remote/HTTP
* **Docs:** [prisma.io/docs/postgres/integrations/mcp-server ↗](https://www.prisma.io/docs/postgres/integrations/mcp-server)
### Render
Manage your Render services
**Server URL**
```
https://mcp.render.com/mcp
```
* **Auth Type:** Bearer Token
* **Transport:** Remote/HTTP
* **Docs:** [render.com/docs/mcp-server ↗](https://render.com/docs/mcp-server)
### Replicate
Search, discover, compare, and run AI models with a cloud API
**Server URL**
```
https://mcp.replicate.com/sse
```
* **Auth Type:** SSE
* **Transport:** Remote/SSE
* **Docs:** [replicate.com/docs/reference/mcp ↗](https://replicate.com/docs/reference/mcp)
### Sanity
Create, query, and manage Sanity content, releases, datasets, and schemas
**Server URL**
```
https://mcp.sanity.io
```
* **Auth Type:** OAuth
* **Transport:** Remote/HTTP
* **Docs:** [sanity.io/docs/compute-and-ai/mcp-server ↗](https://www.sanity.io/docs/compute-and-ai/mcp-server)
### Semgrep
Scan code for security vulnerabilities
**Server URL**
```
https://mcp.semgrep.ai/mcp
```
* **Auth Type:** API Key
* **Transport:** Remote/HTTP
* **Docs:** [semgrep.dev/docs/mcp ↗](https://semgrep.dev/docs/mcp)
### Sentry
Error tracking and performance monitoring
**Server URL**
```
https://mcp.sentry.dev/mcp
```
* **Auth Type:** API Key
* **Transport:** Remote/HTTP
* **Docs:** [docs.sentry.io/product/sentry-mcp ↗](https://docs.sentry.io/product/sentry-mcp/)
### Supabase
Create and manage Supabase projects
**Server URL**
```
https://mcp.supabase.com/mcp
```
* **Auth Type:** OAuth
* **Transport:** Remote/HTTP
* **Docs:** [supabase.com/docs/guides/getting-started/mcp ↗](https://supabase.com/docs/guides/getting-started/mcp)
### Superglue
Discover and execute pre-built tools
**Server URL**
```
https://mcp.superglue.ai
```
* **Auth Type:** Bearer Token (API Key)
* **Transport:** Remote/HTTP
* **Docs:** [docs.superglue.cloud/mcp/using-the-mcp ↗](https://docs.superglue.cloud/mcp/using-the-mcp)
### Webflow
Enhances an agent's understanding of your Webflow projects
**Server URL**
```
https://mcp.webflow.com/sse
```
* **Auth Type:** SSE
* **Transport:** Remote/SSE
* **Docs:** [developers.webflow.com/mcp/reference/overview ↗](https://developers.webflow.com/mcp/reference/overview)
### Wix
Build and manage Wix sites
**Server URL**
```
https://mcp.wix.com/mcp
```
* **Auth Type:** OAuth
* **Transport:** Remote/HTTP
* **Docs:** [dev.wix.com/docs/sdk/articles/use-the-wix-mcp ↗](https://dev.wix.com/docs/sdk/articles/use-the-wix-mcp/about-the-wix-mcp)
***
## How to Connect an MCP Server
1. Go to **Integrations** in your workspace settings.
2. Click **Add Integration** and select **MCP** as the integration type.
3. Copy the **Server URL** from the directory above and configure the authentication method.
4. Click **Save**, then **Add Connection** to authenticate with the service.

Your MCP server is now available in agents, chats, and workflows.
For detailed setup instructions, see the [MCP Integration Guide](/resources/integrations/mcp).
***
## Authentication Types Explained
| Auth Type | Description | What You Need |
| ----------- | --------------------------------------------------- | ------------------------------------ |
| **OAuth** | Full OAuth 2.0 flow with automatic token management | Click "Add connection" and authorize |
| **API Key** | Simple key-based authentication | API key from the service's dashboard |
# Introduction
Source: https://docs.langdock.com/product/introduction
Welcome to the product section of our knowledge base! This section is designed for users of our platform and provides an in-depth understanding of each of our products.
## Welcome to Langdock's Product Guide!
We provide five distinct products in one platform:
* [Chat:](/product/chat/functionalities) A simple interface where you can "chat" with an AI model.
* [Assistants:](/product/assistants/introduction) Build your own chatbots for specific tasks or situations.
* [API:](/api-endpoints/api-introduction) Use AI models in other applications or build your own products powered by Langdock AI models.
* [Integrations:](/product/integrations/introduction) Integrate your existing toolstack into Langdock to use the full potential of AI with your company knowledge.
* [Workflows:](/product/workflows/introduction) Build powerful AI-driven automations by connecting agents, integrations, and custom logic into multi-step workflows.
If you have any questions or feedback, send us an email to [support@langdock.com](mailto:support@langdock.com) or use the support chat in the bottom right.
# Core Concepts
Source: https://docs.langdock.com/product/workflows/core-concepts
Understand the fundamental building blocks of workflows including nodes, triggers, connections, and execution.
## Introduction
Welcome to Workflows fundamentals. This guide covers everything you need to understand how workflows work - from individual nodes to execution patterns, versioning, and testing strategies.
**First time here?** This is your roadmap. Read through to understand the core concepts, then check out the [Getting Started guide](/product/workflows/getting-started) to build your first workflow.
## What is a Node?
A **node** is the fundamental building block of a workflow. Think of it like a step in a recipe - each node does one thing, and you chain them together to create your complete automation.
Nodes can do all sorts of things:
* Execute AI agents to analyze or generate content
* Make HTTP requests to external APIs
* Transform data with custom code
* Route execution based on conditions
* Send notifications to team members
You connect nodes together to create a complete workflow that automates your process from start to finish.
**Visual Programming**: Workflows use a visual canvas where you drag, drop,
and connect nodes. No coding required (unless you want to use a Code node)!
## Understanding Node Structure
Every node has a few common elements that make them easy to work with:
### The Node Header
Shows you the node type and name. Click the play button here to test the node individually - super helpful for debugging!
### Input and Output Tabs
After a node runs, you can click on it to see:
* **Input tab**: What data the node received
* **Output tab**: What data the node produced
This data flows to the next nodes in your workflow.
### Configuration Panel
Click any node to open its settings in the side panel. This is where you configure what the node does, select AI models, set up connections to external services, and more.
## Working with Variables
**Data flows through variables.** When a node finishes running, its output
becomes available as a variable that you can use in later nodes. This is how
information moves through your workflow.
### The Basics
Use double curly braces to reference variables:
```handlebars theme={null}
{{form1.output.email}}
{{analyze_data.output.structured.summary}}
{{api_response.data.items[0].title}}
```
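Conceptually, resolving such a reference means walking the node's output object key by key, with bracketed numbers used as list indexes. This is a simplified sketch of the idea, not Langdock's actual implementation:

```python
import re

def resolve(path: str, context: dict):
    # Splits "api_response.data.items[0].title" into
    # ['api_response', 'data', 'items', '[0]', 'title'],
    # then walks the context object one step at a time.
    value = context
    for part in re.findall(r"[A-Za-z_]\w*|\[\d+\]", path):
        if part.startswith("["):
            value = value[int(part[1:-1])]  # list index like [0]
        else:
            value = value[part]             # dict key like "output"
    return value
```

So `{{form1.output.email}}` reads the `email` field from the output of the node named `form1`.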
### Configuring Fields with Variables
Every node has fields that need configuration. You can use variables in most text fields, or choose from three field modes:
* **Auto Mode**: Let the workflow automatically determine the best value
* **Manual Mode**: Enter exact values or use variables like `{{form1.output.email}}`
* **AI Prompt Mode**: Give instructions for AI to generate dynamic content
Understand Auto, Manual, and AI Prompt modes and when to use each one
**💡 Pro tip**: When you type `{{` in any field, you'll see a dropdown showing all available variables from previous nodes.
***
## How Workflows Execute
**Understanding execution is key.** Workflows can run nodes sequentially (one
after another) or in parallel (simultaneously). Knowing how execution works
helps you build faster, more efficient automations.
### Sequential Execution
By default, your workflow runs nodes one after another, following the connections you've drawn. Each node waits for the previous one to complete before starting. Simple and predictable!
### Parallel Execution
When you connect multiple nodes to a single source node, they can run at the same time. This speeds up your workflow when nodes don't depend on each other.
For example:
```text theme={null}
          ┌→ Send Email ──────┐
Trigger ──┼→ Create Ticket ───┼→ Continue
          └→ Update Database ─┘
```
All three nodes after the trigger can run in parallel since they're independent.
### Error Handling
You control what happens when something goes wrong. For each node, you can choose:
* **Fail workflow**: Stop everything and mark the run as failed (good for critical steps)
* **Continue workflow**: Log the error but keep going (good for non-critical actions)
* **Error callback**: Route to different nodes on error using the red error handle (good for fallback logic)
You can configure error handling for each node individually using the node settings.
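For example, an error callback might route a failed API call to a fallback path (hypothetical nodes for illustration):

```text theme={null}
HTTP Request: Call external API
├─ success → Continue workflow
└─ error (red handle) → Send Notification: Alert team → Fallback action
```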
***
## Workflow Versions
**Version control for workflows.** Every workflow has versions, similar to Git
for code. This keeps production stable while you make changes safely in draft
mode.
### Draft Version (v0)
This is your sandbox. Make changes, test things out, and experiment without affecting anything in production. The draft version is always labeled **v0**.
### Published Versions
When you're happy with your changes, publish them to create a new version (v1.0.0, v1.1.0, etc.). Published versions are:
* **Immutable**: They can't be changed, which keeps production stable
* **Activated for triggers**: Only active published versions respond to real triggers
* **Documented**: Each includes a description of what changed
* **Rollback-ready**: You can reactivate an older version if needed
When publishing, you choose the bump type:
* **Patch** (1.0.0 → 1.0.1): Bug fixes and small tweaks
* **Minor** (1.0.0 → 1.1.0): New features or nodes
* **Major** (1.0.0 → 2.0.0): Breaking changes
Test everything in the draft version before publishing. You can run full
workflow tests without affecting production triggers.
***
## Testing Your Workflow
**Test early, test often.** Don't wait until your workflow is complete to
start testing. Test each node as you build it to catch issues early.
### Testing Individual Nodes
Click the play button on any node to test just that piece. This is great for:
* Verifying an agent's output before building the rest of the workflow
* Checking if an API call returns the expected data
* Debugging issues in specific nodes
### Testing Complete Workflows
Use the "Test run" button on the trigger node to execute the entire workflow with sample data. This runs through all nodes but doesn't mark events as processed, so you can test safely.
### Viewing Results
After any test run, click on nodes to see:
* **Input**: What data the node received
* **Output**: What it produced
* **Messages**: For agent nodes, the full AI conversation
* **Logs**: For code nodes, any console output
* **Usage**: How many AI credits were consumed
***
## Connections Between Nodes
**Connections define flow.** The lines between nodes aren't just visual - they
determine the order of execution and how data flows through your workflow.
### Connection Handles
Nodes have connection points (handles) on their sides:
* **Left side (input)**: Where execution flows in
* **Right side (output)**: Where execution flows out
* **Multiple outputs**: Condition nodes have one output per condition
* **Red handle**: Special error output (appears when error handling is enabled)
### Drawing Connections
Click and drag from an output handle to an input handle. The line shows the execution flow. Workflows execute in the order defined by these connections.
***
## Workflow Status
Your workflow can be in one of three states:
* **Draft**: Still being built - no automatic triggers fire. Safe for making changes and testing.
* **Active**: Published and responding to triggers automatically. This is production mode.
* **Paused**: Exists but temporarily disabled. Good for maintenance or debugging without deleting the workflow.
***
## Human Oversight
For sensitive actions like financial transactions or data deletions, you can require manual approval before proceeding:
Add manual approval steps for important actions requiring human oversight
## Best Practices
* **Give nodes clear names**: "Analyze feedback" not "Agent 1"
* **Keep workflows focused**: One workflow, one purpose
* **Test early and often**: Test each node as you build
* **Add comments**: Document complex logic for your future self
## Next Steps
Now that you understand the fundamentals, dive deeper into specific topics:
Configure node fields with Auto, Manual, and AI Prompt modes
Add manual approval for sensitive actions
Build your first workflow step-by-step
Explore all available triggers and nodes
# Cost Management
Source: https://docs.langdock.com/product/workflows/cost-management
Understand workflow costs and learn strategies to optimize spending while maintaining performance.
## Understanding Workflow Costs
Workflows consume AI credits based on what they do. The main cost drivers are:
### AI Agent Nodes
The biggest expense in most workflows. Costs depend on:
* **Model used**: GPT-4 costs more than GPT-3.5, Claude Opus more than Haiku
* **Input length**: How much data you send to the agent
* **Output length**: How much the agent generates
* **Tool usage**: Web searches, code execution, and integrations add costs
### Action Nodes
Generally low cost or free:
* **Integration actions**: Usually free (no AI involved)
* **HTTP requests**: Free within your workflow execution
* **Notifications**: Free
### Other Costs
* **Web Search nodes**: Small fee per search
* **Code nodes**: Free (no AI usage)
* **Condition/Loop nodes**: Free (just logic)
You can view exact costs for each node after a test run. Click on the node and
check the Usage tab.
## Monitoring Costs
### Per-Run Costs
After each workflow run, you can see:
1. Go to the **Runs** tab
2. Click on any run
3. View total cost and per-node breakdown
4. Check which nodes consumed the most credits
### Workflow-Level Costs
Track spending over time:
1. Go to workflow settings
2. View the **Usage** section
3. See daily, weekly, and monthly costs
4. Download detailed usage reports
## Setting Cost Limits
Protect yourself from unexpected charges by setting spending limits:
### Monthly Limit
Set a maximum spending cap for the entire workflow:
1. Go to workflow settings
2. Set **Monthly Limit** (e.g., \$100)
3. Workflow automatically pauses when limit is reached
4. You'll receive notifications at 50%, 75%, and 90%
### Per-Execution Limit
Prevent runaway costs from a single run:
1. Set **Execution Limit** (e.g., \$5 per run)
2. Workflow stops if a single run exceeds this amount
3. Useful for preventing issues with loops or retries
### Alert Thresholds
Get notified before hitting limits:
1. Add custom alert amounts (e.g., \$25, \$50, \$75)
2. Receive notifications when crossing each threshold
3. Team members can be added as notification recipients
When a workflow hits its spending limit, it pauses automatically. You'll need
to increase the limit or wait until the next month to resume.
## Optimization Strategies
### Choose the Right Model
Don't use premium models for simple tasks:
**Over-powered:**
```text theme={null}
Task: Extract email from text
Model: Claude Sonnet 4.5 ❌ (expensive)
```
**Right-sized:**
```text theme={null}
Task: Extract email from text
Model: GPT-4.1 mini ✅ (fast and cheap)
```
### Optimize Agent Prompts
Shorter, clearer prompts cost less and work better:
**Efficient:**
```text theme={null}
Analyze this feedback. Return:
- Sentiment: positive/neutral/negative
- Urgency: low/medium/high
- Key issue (1 sentence)
Feedback: {{trigger.message}}
```
Use structured outputs. They're more reliable and prevent the model from
generating unnecessary explanatory text.
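For instance, a compact structured-output shape with short, enum-like values (field names illustrative) keeps the model from adding filler prose:

```json theme={null}
{
  "sentiment": "negative",
  "urgency": "high",
  "key_issue": "Checkout fails at the payment step"
}
```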
### Use Code for Simple Transformations
Don't use AI for tasks that code can handle:
**Expensive:**
```text theme={null}
Agent: "Convert this date to YYYY-MM-DD format" ❌
```
**Free:**
```python theme={null}
# Code Node ✅
from datetime import datetime
date = datetime.strptime(trigger.date, "%m/%d/%Y")
return {"date": date.strftime("%Y-%m-%d")}
```
**When to use code instead of AI:**
* Date/time formatting
* Mathematical calculations
* Data filtering and sorting
* String manipulation
* JSON parsing/formatting
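For example, a Code Node can filter and sort records before they ever reach an agent. This is a sketch with an inlined sample list standing in for data from a previous node:

```python theme={null}
# Sample records standing in for data from a previous node.
items = [
    {"name": "A", "value": 120, "created": "2024-03-01"},
    {"name": "B", "value": 40, "created": "2024-03-05"},
    {"name": "C", "value": 300, "created": "2024-02-20"},
]

# Keep only high-value records, newest first, so the agent
# processes 2 items instead of 3.
relevant = [item for item in items if item["value"] > 100]
relevant.sort(key=lambda item: item["created"], reverse=True)

result = {"items": relevant, "count": len(relevant)}
```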
## Cost-Effective Patterns
### Smart Filtering
Filter data before sending to AI:
```text theme={null}
Trigger (100 items) → Code: Filter relevant items (20 items)
                    → Agent: Process 20 items (not 100)
```
### Progressive Enhancement
Start cheap, escalate only if needed:
```text theme={null}
Data → Quick check (regex/code) → [SIMPLE] → Done
                                → [COMPLEX] → AI analysis
```
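The "quick check" can be as simple as a regular expression. This is a hypothetical routing function for a Code Node, not a Langdock API:

```python theme={null}
import re

def quick_route(message: str) -> str:
    # Cheap first pass: order-status questions that include an order
    # number can be answered from a template; anything mentioning a
    # refund (or with no order number) escalates to AI analysis.
    has_order_id = re.search(r"order\s*#?\d+", message, re.IGNORECASE)
    if has_order_id and "refund" not in message.lower():
        return "SIMPLE"
    return "COMPLEX"
```

A Condition node would then branch on the returned label, sending only the COMPLEX cases to an agent.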
## Estimating Costs Before Launch
Before activating a workflow, estimate monthly costs:
### 1. Count Expected Runs
How often will this workflow trigger?
* Forms: Expected submissions per month
* Scheduled: Runs per day × 30
* Webhooks: Events per month from integration
### 2. Test with Real Data
Run 5-10 tests with realistic data and check costs:
```text theme={null}
Example:
- Test run 1: $0.12
- Test run 2: $0.15
- Test run 3: $0.11
- Average: $0.13 per run
```
### 3. Calculate Monthly Estimate
```text theme={null}
Average cost per run: $0.13
Expected monthly runs: 1,000
Estimated monthly cost: $130
Add 20% buffer: $156
```
### 4. Set Appropriate Limits
```text theme={null}
Set monthly limit: $200 (includes buffer)
Set per-run limit: $1.00 (catches anomalies)
```
## Cost vs. Value
Remember: The goal isn't to spend zero - it's to get maximum value for your spending.
### When to Spend More
It's worth paying for:
* **Time savings**: If it saves hours of manual work
* **Quality improvements**: Better AI models for critical decisions
* **Scalability**: Automating tasks that don't scale manually
## Next Steps
Understand workflow execution and optimization
Optimize data handling in workflows
Understand agent node costs and optimization
Build your first workflow
# Field Modes
Source: https://docs.langdock.com/product/workflows/field-modes
Understand how to configure node fields with Auto, Manual, and AI Prompt modes for flexible and powerful workflows.
## Overview
Every node in your workflow has fields that need to be configured—recipient emails, message content, API parameters, or data to process. Langdock gives you three intelligent ways to fill these fields, each optimized for different scenarios.
**Choose the right mode** for each field to balance flexibility, control, and
AI credit usage.
## The Three Field Modes
These modes apply to most node fields. Some nodes like **Condition** have
their own modes (Manual and Prompt AI) optimized for their specific purpose.
### Auto Mode
Let the workflow automatically determine the best value based on context from previous nodes.
**How it works:**
* Analyzes all data from previous nodes in the workflow
* Intelligently matches field requirements with available data
* Automatically fills the field with the most appropriate value
**Best for:**
* Straightforward data mapping (form data → spreadsheet)
* Fields where the context is obvious
* When you want flexibility and don't need exact control
* Reducing configuration time
**Example:**
```text theme={null}
Scenario: Sending email based on form submission
Field: "Recipient Email"
Mode: Auto
Result: Automatically uses email from form trigger
```
**Advantages:**
* ✅ Fast setup—minimal configuration needed
* ✅ Very flexible—adapts to varying data structures
* ✅ Great for covering edge cases automatically
* ✅ Handles unexpected data gracefully
**Disadvantages:**
* ❌ Consumes AI credits (uses AI to determine values)
* ❌ Less control over exact output
* ❌ Can get expensive with frequent executions
***
### Manual Mode
Enter exact values or reference specific data from previous nodes with complete control.
**How it works:**
* What you type is exactly what appears (no AI processing)
* Use variables to insert data from previous nodes: `{{trigger.name}}`
* Combine static text with dynamic variables
**Best for:**
* Fixed values (API keys, webhook URLs, signatures)
* Exact data references you want to control
* Combining static text with variables
* When you need predictable, consistent output
**Example:**
```handlebars theme={null}
Field: "Email Subject"
Mode: Manual
Value: "New order #{{trigger.output.order_id}} from {{trigger.output.customer_name}}"
Result: "New order #12345 from Alice Smith"
```
**Advantages:**
* ✅ Complete control over output
* ✅ No AI credits used
* ✅ Predictable and consistent
* ✅ Fastest execution (no AI processing)
* ✅ Perfect for templates and fixed formats
**Disadvantages:**
* ❌ Requires manual configuration
* ❌ Can't generate dynamic content based on context
* ❌ You must know exact variable paths
***
### AI Prompt Mode
Give natural language instructions for the AI to generate dynamic content based on workflow data.
**How it works:**
* Write instructions describing what content should be generated
* AI analyzes data from previous nodes and creates appropriate content
* Can reference specific data using variables: `{{trigger.output.message}}`
* Generates unique output each time based on context
**Best for:**
* Email composition (subject lines, body content)
* Content generation (summaries, descriptions)
* Dynamic messages that adapt to context
* Transforming or reformatting data creatively
**Example:**
```text theme={null}
Field: "Email Body"
Mode: AI Prompt
Instructions: "Write a friendly 2-paragraph response to {{trigger.message}},
addressing their concern about {{agent.issue_category}} and offering a solution."
Result: Personalized email generated based on actual message content
```
**Advantages:**
* ✅ Creates intelligent, context-aware content
* ✅ Adapts to different situations automatically
* ✅ Saves time writing templates
* ✅ Can combine multiple data sources creatively
* ✅ Produces natural, varied output
**Disadvantages:**
* ❌ Consumes AI credits
* ❌ Slightly slower than manual mode
* ❌ Output may vary between executions
* ❌ Requires clear prompting for best results
## Choosing the Right Mode
Use this decision tree to select the appropriate mode:
```text theme={null}
Is the value always the same?
└─ Yes → Manual Mode (with static text)
Is the value from a previous node, unchanged?
└─ Yes → Manual Mode (with variables)
Do you need AI to generate or transform content?
├─ Yes → AI Prompt Mode
└─ No → Auto Mode
```
## Mode Comparison
| Feature | Auto | Manual | AI Prompt |
| ---------------------- | -------------- | ------------ | ------------------ |
| **Configuration Time** | Fast | Medium | Medium |
| **AI Credits** | Yes | None | Yes |
| **Control** | Low | High | Medium |
| **Flexibility** | High | Low | High |
| **Best For** | Simple mapping | Fixed values | Content generation |
## Real-World Examples
### Customer Support Workflow
```text theme={null}
Trigger: Form submission (customer complaint)
Action: Send email response
├─ To: Auto (uses form.email automatically)
├─ From: Manual → "support@company.com"
├─ Subject: AI Prompt → "Write a professional subject line about {{form.output.issue_type}}"
└─ Body: AI Prompt → "Draft a helpful response to: {{form.output.message}}"
```
### Data Processing Workflow
```text theme={null}
Trigger: New spreadsheet row
Action: Create CRM record
├─ Name: Manual → "{{trigger.output.customer_name}}"
├─ Email: Auto (matches automatically)
├─ Notes: AI Prompt → "Summarize: {{trigger.output.feedback}}"
└─ Status: Manual → "New Lead"
```
### Notification Workflow
```text theme={null}
Agent: Analyze document
Action: Send notification
├─ Message: AI Prompt → "Create an executive summary of {{agent.output.analysis}}"
└─ Recipient: Manual → "team@company.com"
```
## Using Variables in Manual and AI Prompt Modes
Both Manual and AI Prompt modes support variables from previous nodes:
**Syntax:**
```handlebars theme={null}
{{node_name.output.field_name}}
{{node_name.output.nested.field}}
{{node_name.output.array[0].property}}
```
**Examples:**
```handlebars theme={null}
{{trigger.output.email}}             ← Form field
{{agent.output.summary}}             ← Agent output
{{http_request.output.data.userId}}  ← API response
{{loop_item.output.name}}            ← Current loop item
```
Type `{{` in any Manual or AI Prompt field to see available variables from previous nodes.
## Best Practices
* **Start with Auto mode**: Begin with Auto for most fields. If it doesn't work as expected, switch to Manual or AI Prompt for more control.
* **Use Manual for fixed values**: API keys, email addresses, signatures, and webhook URLs should always use Manual mode for consistency and efficiency.
* **Be specific in AI prompts**: Instead of "write an email", use "write a professional 2-paragraph email thanking the customer for their interest in the product".
* **Watch your AI credits**: Both Auto and AI Prompt modes consume AI credits. Use Manual mode for fixed values and simple variable references to reduce costs.
* **Test before publishing**: Run test workflows with actual data to verify each mode behaves as expected.
## Next Steps
Learn how nodes work together in workflows
Learn about Manual and Prompt AI modes for conditions
Use AI for content analysis and generation
Build your first workflow
# Getting Started
Source: https://docs.langdock.com/product/workflows/getting-started
Build your first workflow with this step-by-step guide. Learn how to create a simple automation that processes form submissions with AI.
## What You'll Build
In this guide, you'll create a workflow that:
1. Receives form submissions for customer feedback
2. Uses AI to analyze the sentiment and categorize the feedback
3. Routes high-priority feedback to notify the team
4. Creates a record in your system for all feedback
This covers the essential workflow patterns you'll use in most automations.
## Prerequisites
* Your workspace admin enabled workflows in the workspace settings
* Basic understanding of your use case and goals
**Estimated time**: 15 minutes to complete this tutorial
## Step 1: Create a New Workflow
Navigate to the Workflows section and click **Create Workflow**.
1. Give your workflow a descriptive name: "Customer Feedback Processor"
2. Click **Create**
You'll see a blank canvas with a starter node in the center.
## Step 2: Add a Trigger
The starter node lets you choose how to begin your workflow. Let's create a form trigger:
1. Click **Form** from the trigger options
2. The trigger node appears on the canvas
### Configure the Form
Click on the trigger node to open the configuration panel:
1. **Form Title**: "Customer Feedback Form"
2. **Add Form Fields**:
* Field 1: `name` (Text, Required) - Label: "Your Name"
* Field 2: `email` (Text, Required) - Label: "Email Address"
* Field 3: `feedback` (Multi-line Text, Required) - Label: "Your Feedback"
* Field 4: `product` (Select, Required) - Label: "Product"
* Options: "Mobile App", "Web Platform", "API", "Other"
3. **Form Settings**:
* Enable "Make this form public" to allow external submissions
4. Click **Save**
**Pro tip**: Click the **Copy URL** button to get your form link. You can share this with customers or embed it on your website once you've deployed the workflow and made it public.
## Step 3: Add an Agent Node
Now let's add AI to analyze the feedback:
1. Click the **+** button on the trigger's output handle (right side)
2. Select **Agent** from the node menu
3. The agent node appears, connected to your trigger
### Configure the Agent
Click on the agent node:
1. **Name**: "Analyze Feedback"
2. **Agent Mode**: Create new agent
3. **Agent Name**: "Feedback Analyzer"
4. **Instructions**:
```text theme={null}
You are analyzing customer feedback. For the given feedback:
1. Determine the sentiment: positive, neutral, or negative
2. Identify the main category: bug, feature_request, complaint, praise, or question
3. Assign a priority level: low, medium, or high
4. Extract the key issue or topic in one sentence
Provide your analysis in the structured format.
```
5. **Input**: Add the feedback variable to the prompt
```handlebars theme={null}
{{form1.output}}
```
6. **Model**: Select your preferred model (e.g., GPT-4.1 or Claude Sonnet 4.5)
7. **Enable Structured Output**:
* Click **Add Output Field**
* Field 1: `sentiment` (String) - "The sentiment of the feedback"
* Field 2: `category` (String) - "The category of feedback"
* Field 3: `priority` (String) - "Priority level: low, medium, or high"
* Field 4: `summary` (String) - "One-sentence summary of the issue"
8. Click **Save**
### Test the Agent
Before continuing, test the agent node:
1. Click the **play button** on the trigger node
2. Fill in test data in the form that appears
3. Click **Run**
4. Once complete, click on the agent node
5. View the **Output** tab to see the analysis
You should see structured data like:
```json theme={null}
{
"sentiment": "negative",
"category": "bug",
"priority": "high",
"summary": "Mobile app crashes when uploading images"
}
```
## Step 4: Add a Condition Node
Let's route high-priority feedback differently:
1. Click the **+** button on the agent node's output
2. Select **Condition**
3. Name it "Check Priority"
### Configure Conditions
1. Click **Add Condition**
2. **Condition 1**:
* Name: "High Priority"
* Mode: Manual
* Expression: `{{agent.output.structured.priority === "high"}}`
3. Click **Add Condition** again
4. **Condition 2**:
* Name: "Normal Priority"
* Mode: Manual
* Expression: `{{agent.output.structured.priority !== "high"}}` (this catches everything else)
Each condition now has its own output handle on the right side of the node.
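The two expressions partition every run between the branches. The same logic in plain Python (illustrative only, not how Langdock evaluates conditions internally):

```python theme={null}
def route(priority: str) -> str:
    # "High Priority" when priority is exactly "high";
    # "Normal Priority" catches everything else.
    return "High Priority" if priority == "high" else "Normal Priority"
```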
## Step 5: Add Notification for High Priority
For high-priority feedback, let's notify the team:
1. Click the **+** button on the "High Priority" condition handle
2. Select **Send Notification**
### Configure the Notification
**Prompt**:
```handlebars theme={null}
A high-priority customer feedback was received:

Customer: {{form1.output.name}} ({{form1.output.email}})
Product: {{form1.output.product}}
Category: {{agent.output.structured.category}}
Sentiment: {{agent.output.structured.sentiment}}
Issue: {{agent.output.structured.summary}}
Full Feedback: {{form1.output.feedback}}

Please review and respond promptly.
```
Then click **Save**.
## Step 6: Add an Action Node
Now let's save all feedback to your system. This example uses Google Sheets, but you can use any integration:
1. Click **+** on **both** condition handles (we want to save all feedback)
2. Select **Google Sheets**
### Configure the Google Sheets Node
1. **Spreadsheet ID**: Insert the ID of your target spreadsheet
2. **Range**: You can leave this on Auto
3. **Value Input**: Set to "Prompt AI" and insert the following:
```handlebars theme={null}
Please insert this feedback as one structured row:

Customer: {{form1.output.name}} ({{form1.output.email}})
Product: {{form1.output.product}}
Category: {{agent.output.structured.category}}
Sentiment: {{agent.output.structured.sentiment}}
Issue: {{agent.output.structured.summary}}
Full Feedback: {{form1.output.feedback}}
```
4. Click **Save**
**Alternative**: You can swap Google Sheets for another integration action,
such as creating a Notion page, or use an HTTP Request node to send the data to your own API.
## Step 7: Test the Complete Workflow
Now test the entire workflow:
1. Click on the trigger node
2. Click the **Test run** button in the toolbar
3. Fill in the test form with sample data
4. Click **Run workflow**
5. Watch as each node executes in sequence
6. Click on nodes to view their inputs and outputs
Try testing with:
* Different priority levels
* Various types of feedback
* Different products
Make sure both paths (high priority and normal priority) work correctly.
## Step 8: Deploy Your Workflow
Once testing is complete, deploy the workflow to activate it:
1. Click the **Deploy** button in the top right
2. Choose the bump type: **Major** (first release of a new workflow)
3. Add description: "Feedback Collection"
4. Click **Deploy**
Your workflow is now **v1.0.0** and will process real form submissions.
## Step 9: Share Your Form
Get the form URL to share with customers:
1. Click on the trigger node
2. Click **Copy URL** in the node toolbar
3. Share this URL via email, website, or support portal
**Pro tip**: You can embed the form in your website using an iframe or link to
it from your support page.
## Step 10: Monitor Workflow Runs
After your workflow is live, monitor its performance:
1. Go to the **Runs** tab at the bottom of the canvas
2. View all workflow executions
3. Click on any run to see detailed execution data for each node
4. Check the **Usage** section to monitor AI credit consumption
## What You've Learned
Congratulations! You've built a complete workflow that demonstrates:
* ✅ Creating form triggers to capture data
* ✅ Using AI agents to analyze content
* ✅ Implementing conditional logic to route execution
* ✅ Sending notifications
* ✅ Integrating with external systems
* ✅ Testing and deploying workflows
## Next Steps
### Enhance This Workflow
Try adding:
* A **Code Node** to calculate metrics or transform data
* An **Action Node** to automatically create support tickets
* Additional conditions for different priority levels
### Explore Advanced Features
Control and optimize workflow costs
## Common Questions
**Can I edit a published version?**
Published versions are immutable, but you can always edit the draft version (v0) and publish a new version when ready. This ensures your production workflows stay stable while you make changes.

**What happens if a node fails?**
By default, the workflow stops and marks the run as failed. You can configure error handling strategies for each node, including continuing on error or routing to error-handling paths.

**How much do workflows cost?**
Workflows consume AI credits based on the models and nodes used. You can set spending limits and view detailed usage in the workflow settings. Check the Usage tab after test runs to estimate costs.
## Get Help
Need assistance? We're here to help:
* Check the [Core Concepts](/product/workflows/core-concepts) to understand workflow fundamentals
* Review [Variable Usage](/product/workflows/variable-usage) for advanced data handling
* Contact support at [support@langdock.com](mailto:support@langdock.com)
# Human in the Loop
Source: https://docs.langdock.com/product/workflows/human-in-the-loop
Pause workflow execution and require manual approval from the workflow owner before proceeding.
## Overview
Human in the Loop (HITL) allows you to pause workflow execution and require manual approval before proceeding. When a workflow reaches an approval step, it stops and waits for you (the workflow owner) to review and approve before continuing.
## How It Works
When a workflow reaches an approval step:
1. Workflow pauses execution
2. You receive a notification as the workflow owner
3. You review the workflow details and approve to continue
4. Workflow resumes and executes the next steps
Currently, only the **workflow owner** can approve paused workflows. Sharing approval rights with other team members is not yet available.
## Example Use Cases
### Financial Transactions
```text theme={null}
Trigger: Invoice received
→ Agent: Extract invoice details
→ Approval: Review payment
└─ Shows: Payment amount and vendor
→ Action: Create payment
→ Notification: Confirm payment processed
```
**Why approval needed:** Financial transactions should have oversight, especially for amounts over a certain threshold.
### Data Deletion
```text theme={null}
Trigger: Cleanup request
→ HTTP Request: Fetch old records
→ Code: Filter records older than 90 days
→ Approval: Review deletion
└─ Shows: Record count and preview
→ Action: Delete records
→ Notification: Confirm deletion complete
```
**Why approval needed:** Data deletion is irreversible and requires verification.
### Customer Communications
```text theme={null}
Trigger: Form submission
→ Agent: Generate response
→ Approval: Review message
└─ Shows: Email draft generated by agent
→ Action: Send email to customer
→ Notification: Email sent confirmation
```
**Why approval needed:** Customer-facing communications represent your brand and may need quality review.
### Production Changes
```text theme={null}
Trigger: Manual or scheduled
→ Agent: Review configuration changes
→ Approval: Review deployment
└─ Shows: Change summary
→ Action: Update production system
→ Notification: Deployment complete
```
**Why approval needed:** Production changes carry risk and benefit from review.
## When to Use Human in the Loop
**✅ Good use cases:**
* Financial transactions over a threshold
* Data deletions or irreversible operations
* Customer communications requiring review
* Production system changes
* Compliance-sensitive actions
* High-value decisions
**❌ Avoid for:**
* Routine, low-risk actions
* Steps that need to run immediately
* Actions that happen frequently throughout the day
* Workflows where manual approval becomes a bottleneck
## Combining with Conditions
Smart approval workflows use conditions to require approval only when needed:
```text theme={null}
Agent: Calculate invoice amount
Condition: Amount > $5000?
├─ Yes → Approval: Review high-value payment
│ → Action: Create payment
└─ No → Action: Create payment (auto-approved)
```
This pattern gives you:
* Automation for routine cases
* Oversight for exceptional cases
* Efficient use of your time
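The condition's logic amounts to a simple threshold check. A sketch, with the \$5,000 figure taken from the example above:

```python theme={null}
APPROVAL_THRESHOLD = 5000  # dollars; taken from the example above

def needs_approval(amount: float) -> bool:
    # Invoices above the threshold pause for the workflow owner;
    # everything else is paid automatically.
    return amount > APPROVAL_THRESHOLD
```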
## Best Practices
Don't require approval for every step—focus on actions with real risk or significant impact. Too many approvals slow down automation benefits.
Ensure the workflow provides enough information at the approval step. Include relevant details like amounts, recipients, or data previews so you can make informed decisions.
Since only you (the workflow owner) can approve, consider your availability. For time-sensitive workflows, ensure you can respond promptly.
Run test workflows to ensure approval notifications arrive correctly and you have the information needed to approve confidently.
## Next Steps
Get notified when approval is needed
Add conditional logic to your workflows
Build your first workflow
Understand workflow fundamentals
# Introduction to Workflows
Source: https://docs.langdock.com/product/workflows/introduction
Build powerful AI-driven automations by connecting AI agents, integrations, and custom logic into multi-step workflows.
## What are Workflows?
Workflows are the next evolution in your Langdock journey. You've already experienced the power of **Chat** for interactive conversations, **Agents** for specialized AI helpers, and **Integrations** for connecting your tools. **Workflows** brings all of these together into end-to-end automations.
Think of Workflows as your orchestration layer - where you can chain multiple steps together, add conditional logic, loop through data, and create sophisticated automations that run automatically. Whether triggered by a form submission, a schedule, or an event in your connected apps, Workflows handle complex, multi-step processes from start to finish.
## Why Workflows?
### More Powerful Than Chat
While Chat is perfect for interactive Q\&A, Workflows automate entire processes without human intervention. Set them up once, and they run reliably 24/7.
### More Flexible Than Agents
Agents are great for specific tasks, but Workflows let you combine multiple AI agents, add custom logic, integrate with external APIs, and create sophisticated decision trees.
### More Than Integrations
Integrations connect your apps, but Workflows orchestrate complex workflows across those apps - with AI at every step to analyze, decide, and adapt.
## Ready to Get Started?
* [Get Started →](/product/workflows/getting-started): Follow our quickstart guide to create a working automation in 15 minutes
* [Read Core Concepts →](/product/workflows/core-concepts): Understand how nodes, triggers, and execution work
* [View Triggers →](/product/workflows/nodes/manual-trigger): Discover all available nodes and what they can do
## Explore by Category
Build your first workflow in 15 minutes
Learn the building blocks of workflows
Understand how to use static values and dynamic variables
Add manual approval steps to your workflows
Learn how to reference data between nodes
Monitor and control workflow spending
Workflows must be activated by an admin in workspace settings. Usage may require a subscription upgrade and will consume AI credits. Set cost limits and enable monitoring to control spending. Learn more in [Cost Management](/product/workflows/cost-management).
# Action
Source: https://docs.langdock.com/product/workflows/nodes/action-node
Perform actions in connected applications like creating tasks, sending messages, or updating records.
## Overview
The Action node executes operations in your connected integrations - send Slack messages, create CRM records, add rows to spreadsheets, update project tasks, and more.
**Best for**: Creating/updating records, sending messages, triggering actions
in connected apps, and integrating with external services.
## Configuration
1. **Select Integration**: Choose from connected apps (Slack, Google Sheets, Notion, etc.)
2. **Choose Action**: Select specific action (Send message, Create record, etc.)
3. **Map Fields**: Provide data from previous nodes using variables
4. **Configure Settings**: Action-specific options
## Common Actions
**Slack**
* Send message to channel
* Send direct message
* Create channel
* Add reaction
**Google Sheets**
* Add row
* Update row
* Delete row
* Update cell
**Gmail**
* Send email
* Create draft
* Add label
**Notion**
* Create page
* Update database record
**CRM (Salesforce, HubSpot)**
* Create/update contact
* Create/update deal
* Add activity log
## Example
```handlebars theme={null}
Slack: Send Message
Channel: #support
Message: New ticket #{{trigger.ticket_id}}
Priority: {{agent.output.priority}}
Assigned to: {{condition.output.assignee}}
```
## Next Steps
Trigger workflows from integration events
Build your first workflow
# Agent
Source: https://docs.langdock.com/product/workflows/nodes/agent-node
Use AI to analyze data, make decisions, generate content, and extract structured information.
## Overview
The Agent node is where AI comes into your workflow. It can analyze text, make intelligent decisions, extract structured data, generate content, answer questions, and much more - all using natural language instructions.
**Best for**: Content analysis, categorization, data extraction,
decision-making, summarization, and any task requiring intelligence.
## When to Use Agent Node
**Perfect for:**
* Analyzing and categorizing content
* Extracting structured data from unstructured text
* Making decisions based on criteria
* Generating summaries or reports
* Sentiment analysis
* Answering questions about data
* Content generation
* Translation and language tasks
**Not ideal for:**
* Simple data transformations (use Code Node)
* Mathematical calculations (use Code Node)
* Direct API calls (use HTTP Request Node)
## Configuration
### Select or Create Agent
**Use Existing Agent**
* Choose from your workspace agents
* Inherits agent's configuration and knowledge
* Consistent behavior across chat and workflows
**Create New Agent**
* Define agent specifically for this workflow
* Configure independently
* Optimized for automation
### Agent Instructions
Provide clear instructions for what the agent should do:
**Good Instructions:**
```text theme={null}
Analyze the customer feedback and determine:
1. Sentiment (positive, neutral, negative)
2. Main topic category (product, service, pricing, support)
3. Urgency level (low, medium, high)
4. Key issues mentioned
Feedback: {{trigger.output.feedback_text}}
```
**Poor Instructions:**
```text theme={null}
Analyze this feedback: {{trigger.feedback_text}}
```
### Input Variables
Pass data from previous nodes to the agent:
```handlebars theme={null}
Customer: {{trigger.output.customer_name}}
Order ID: {{trigger.output.order_id}}
Issue: {{trigger.output.description}}
Please analyze this support ticket and categorize it.
```
### Structured Output (Recommended)
Define the exact structure you want from the agent:
**Why Use Structured Output:**
* Guaranteed format (always valid JSON)
* No parsing errors
* Reliable for downstream nodes
* Easier to debug
**Example:**
```json theme={null}
{
"sentiment": "positive",
"category": "product_feedback",
"priority": "medium",
"summary": "Customer loves the new feature",
"action_needed": false
}
```
**Configure:**
1. Enable "Structured Output"
2. Define output fields:
* Field name
* Type (string, number, boolean, array)
* Description
### Tools & Capabilities
Enable additional capabilities for the agent:
**Web Search**
* Agent can search the internet
* Good for fact-checking and current information
* Adds cost per search
**Code Execution**
* Agent can write and run Python code
* Good for calculations and data analysis
* Safe sandboxed environment
**Integrations**
* Agent can use connected integration actions
* Access to your tools and data
* Good for dynamic workflows
## Example Use Cases
### Content Categorization
```text theme={null}
Agent Configuration:
- Instructions: "Categorize this article by topic and suggest tags"
- Input: {{trigger.article_text}}
- Model: GPT-3.5 Turbo
- Structured Output:
{
"category": "string",
"tags": ["string"],
"confidence": "number"
}
```
### Lead Qualification
```text theme={null}
Agent Configuration:
- Instructions: "Score this lead based on company size, role, and use case"
- Input:
Company: {{trigger.company}}
Role: {{trigger.role}}
Use case: {{trigger.use_case}}
- Model: GPT-4
- Structured Output:
{
"score": "number (0-100)",
"qualification": "hot|warm|cold",
"reasoning": "string"
}
```
### Document Summarization
```text theme={null}
Agent Configuration:
- Instructions: "Summarize this document in 3-5 bullet points"
- Input: {{trigger.document_text}}
- Model: Claude Sonnet
- Structured Output:
{
"summary_points": ["string"],
"key_topics": ["string"],
"word_count": "number"
}
```
### Sentiment Analysis
```text theme={null}
Agent Configuration:
- Instructions: "Analyze sentiment and emotional tone"
- Input: {{trigger.customer_message}}
- Model: GPT-3.5 Turbo
- Structured Output:
{
"sentiment": "positive|neutral|negative",
"emotion": "string",
"confidence": "number"
}
```
## Accessing Agent Output
**Without Structured Output:**
```handlebars theme={null}
{{agent_node_name.output.response}}
```
**With Structured Output:**
```handlebars theme={null}
{{agent_node_name.output.sentiment}}
{{agent_node_name.output.category}}
{{agent_node_name.output.summary}}
{{agent_node_name.output.tags[0]}}
```
## Prompt Engineering Tips
**Be Explicit**
```text theme={null}
❌ "Analyze this text"
✅ "Analyze this customer feedback and categorize as bug, feature request, or question"
```
**Provide Context**
```text theme={null}
You are analyzing customer support tickets for a SaaS company.
Categorize by urgency based on:
- Urgent: System down, data loss, security issue
- High: Blocking user's work
- Medium: Inconvenience but has workaround
- Low: Feature request or question
```
**Use Examples**
```text theme={null}
Categorize these issues:
Example 1: "Can't log in, getting 500 error" → Urgent
Example 2: "How do I export data?" → Low
Now categorize: {{trigger.issue}}
```
**Constrain Output**
```text theme={null}
Respond with ONLY one of these categories: bug, feature, question
Do not explain your reasoning.
```
## Best Practices
For workflows, structured output is almost always better. It prevents
parsing errors and makes data easier to use in subsequent nodes.
Clear, detailed instructions lead to better results. Include examples if the
task is complex.
Agents work best with focused inputs. If processing long documents, consider
extracting relevant sections first.
Agent performance can vary. Test with actual data examples to ensure
consistent results.
Add validation after the agent node to handle unexpected outputs or errors.
## Next Steps
Transform data before/after agent processing
Route based on agent decisions
Optimize agent costs
Learn about using agents in workflows
# Code
Source: https://docs.langdock.com/product/workflows/nodes/code-node
Execute custom JavaScript code for data transformation and custom logic.
## Overview
The Code node lets you write custom JavaScript to transform data, perform calculations, implement complex logic, or handle tasks that other nodes can't.
**Best for**: Data transformations, calculations, custom business logic, data
formatting, and complex data manipulation.
## When to Use Code Node
**Perfect for:**
* Data transformations and formatting
* Mathematical calculations
* Custom business logic
* JSON parsing and manipulation
* Data validation and cleaning
* Date/time operations
**Not ideal for:**
* AI analysis (use Agent node)
* API calls (use HTTP Request node)
* Simple conditions (use Condition node)
## Configuration
**Code Editor**: Write your JavaScript transformation logic
**Access Previous Nodes**: All previous node outputs are available as variables
## Examples
### Calculate Statistics
```javascript theme={null}
// Access data from previous nodes
const scores = agent.scores || [];
// Guard against an empty array (avoids dividing by zero below)
if (scores.length === 0) {
  throw new Error("No scores to analyze");
}
// Calculate statistics
const average = scores.reduce((a, b) => a + b, 0) / scores.length;
const max = Math.max(...scores);
const min = Math.min(...scores);
// Return result
return {
average_score: average.toFixed(2),
highest_score: max,
lowest_score: min,
grade: average >= 90 ? "A" : average >= 80 ? "B" : "C"
};
```
### Validate and Clean Data
```javascript theme={null}
// Access form data
const email = trigger.email || "";
// Coerce to a number first; form inputs often arrive as strings
const amount = parseFloat(trigger.amount) || 0;
// Validate
if (!email.includes("@")) {
throw new Error("Invalid email format");
}
if (amount <= 0) {
throw new Error("Amount must be greater than zero");
}
// Clean and return
return {
email: email.trim().toLowerCase(),
amount: parseFloat(amount.toFixed(2)),
validated: true
};
```
## Accessing Code Output
Use the code node name to access returned values in subsequent nodes:
```handlebars theme={null}
{{code_node_name.output.customer}}
{{code_node_name.output.total}}
{{code_node_name.output.formatted_date}}
{{code_node_name.output.processed_items[0].name}}
```
## Available Functions
The Code node runs in the same secure sandbox environment as custom integrations, giving you access to built-in utility functions:
* **`ld.request()`** - Make HTTP requests
* **`ld.log()`** - Output debugging information
* **Data conversions** - CSV, Parquet, Arrow format conversions
* **Standard JavaScript** - JSON, Date, Math, Array, Object methods
View all available sandbox utilities including data conversions, SQL validation, cryptography, and more.
## Best Practices
Return data as objects for easy access in later nodes. This makes it simple to reference specific values in subsequent nodes using dot notation.
Use `||` or optional chaining to provide default values and prevent errors when data is undefined or null.
Wrap risky operations in try-catch blocks to prevent workflow failures. This allows you to handle errors gracefully and provide meaningful error messages.
Complex logic might be better in an Agent node. Use code nodes for straightforward transformations and calculations, not for tasks requiring intelligence or context understanding.
Document what your code does for future reference. Clear comments help you and your team understand the logic when revisiting the workflow later.
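The practices above (default values, try-catch, object returns, comments) work together; here is a minimal sketch, with `trigger` mocked locally since in a real workflow it holds the trigger node's output:

```javascript theme={null}
// In a workflow, `trigger` is supplied by the trigger node; it is mocked here.
const trigger = { customer: "  Ada Lovelace ", items: [{ price: 19.99 }, { price: 5 }] };

// Default values guard against missing fields
const name = (trigger.customer || "unknown").trim();
const items = trigger.items || [];

let total = 0;
try {
  // Sum item prices, tolerating items without a price
  total = items.reduce((sum, item) => sum + (item.price || 0), 0);
} catch (err) {
  // Fail with a meaningful message instead of a cryptic stack trace
  throw new Error("Could not compute total: " + err.message);
}

// Return an object so later nodes can use dot notation, e.g. {{code.output.total}}
const result = { customer: name, total: Number(total.toFixed(2)), item_count: items.length };
// Inside a Code node you would `return result;`
console.log(result);
```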
## Next Steps
Use AI for intelligent processing
Fetch external data
# Condition
Source: https://docs.langdock.com/product/workflows/nodes/condition-node
Route workflow execution down different paths based on conditions and logic.
## Overview
The Condition node adds branching logic to your workflow. Based on data from previous nodes, route execution down different paths - like if-then-else statements in code, but visual and no-code.
**Best for**: Approval workflows, priority routing, data validation,
multi-path automations, and decision logic.
## How It Works
1. Add multiple conditions (If, Else if, etc.)
2. Each condition evaluates to true/false using Manual or Prompt AI mode
3. By default, the **first** matching condition is executed
4. Enable "Allow multiple conditions" to execute **all** matching paths
5. Each condition gets its own output handle on the node
## Configuration
### Model Selection
Choose the AI model used for Prompt AI mode conditions. This only applies when using Prompt AI mode—Manual mode doesn't use AI.
### Condition Modes
Each individual condition can use one of two modes:
**Manual Mode**
Write expressions inside `{{ }}` brackets:
```handlebars theme={null}
{{ trigger.output.amount > 1000 }}
{{ agent.output.sentiment === "negative" }}
{{ trigger.output.email.includes("@company.com") }}
```
**Important**: All manual expressions must be wrapped in `{{}}` brackets.
**Prompt AI Mode**
Give natural language instructions for the AI to evaluate:
```text theme={null}
Determine if this customer message requires urgent attention based on:
- Keywords like "urgent", "emergency", "asap"
- Angry or frustrated tone
- Mention of high-priority issues
Context: {{trigger.output.message}}
```
### Allow Multiple Conditions
**Disabled (default)**: First match wins
* Conditions evaluated in order (top to bottom)
* Only the first matching condition executes
* Other conditions are skipped
* Most common use case
**Enabled**: All matching conditions execute
* All conditions are evaluated
* Every condition that returns true executes
* Useful for triggering multiple parallel actions
## Example Use Cases
### Priority Routing (Manual Mode)
```text theme={null}
Condition 1: "If High Priority"
Mode: Manual
Expression: {{ agent.priority === "high" }}
Condition 2: "Else if Medium Priority"
Mode: Manual
Expression: {{ agent.priority === "medium" }}
Condition 3: "Else Low Priority"
Mode: Manual
Expression: {{ true }}
```
### Customer Segmentation (Prompt AI Mode)
```text theme={null}
Condition 1: "If is already a customer"
Mode: Prompt AI
Instructions: Check if {{trigger.email}} exists in our customer database based on {{http_request.customers}}
Condition 2: "Else if is not a customer yet"
Mode: Prompt AI
Instructions: Determine if this is a new prospect
```
### Amount Threshold (Manual Mode)
```text theme={null}
Condition 1: "If Needs Approval"
Mode: Manual
Expression: {{ trigger.amount >= 5000 }}
Condition 2: "Else Auto-Approve"
Mode: Manual
Expression: {{ trigger.amount < 5000 }}
```
## Choosing Between Modes
### Use Manual Mode When:
* Logic is straightforward (checking values, comparing numbers)
* You need predictable, consistent results
* You want to minimize AI credit usage
* Conditions are based on exact data matching
### Use Prompt AI Mode When:
* Logic requires understanding context or nuance
* Evaluating natural language content
* Making subjective judgments
* Combining multiple factors that need interpretation
**Example - When Manual is Better:**
```handlebars theme={null}
{{ trigger.amount > 1000 }} ✅ Simple, clear, no AI needed
```
## Manual Mode Operators
When writing manual expressions, you can use:
**Comparison**: `===`, `!==`, `>`, `<`, `>=`, `<=`\
**Logical**: `&&` (and), `||` (or), `!` (not)\
**String Methods**: `.includes()`, `.startsWith()`, `.endsWith()`\
**Existence**: Check if value exists with `{{ trigger.field }}`
**Examples:**
```handlebars theme={null}
{{ trigger.status === "approved" }}
{{ agent.score > 80 && agent.verified === true }}
{{ trigger.email.includes("@company.com") }}
{{ trigger.tags.includes("urgent") }}
```
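Because manual-mode expressions are ordinary JavaScript comparisons, you can sanity-check them outside the workflow; a sketch using invented `trigger` and `agent` sample data:

```javascript theme={null}
// Sample data standing in for previous node outputs (invented for illustration)
const trigger = { status: "approved", email: "ana@company.com", tags: ["urgent", "billing"] };
const agent = { score: 85, verified: true };

// Each line mirrors one of the example expressions above
const isApproved = trigger.status === "approved";
const highQuality = agent.score > 80 && agent.verified === true;
const isInternal = trigger.email.includes("@company.com");
const isUrgent = trigger.tags.includes("urgent");

console.log(isApproved, highQuality, isInternal, isUrgent); // all true for this data
```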
## Best Practices
Always add a final condition with `{{ true }}` to catch cases that don't match other conditions.
Conditions are evaluated top to bottom. Put most specific conditions first,
general ones last.
Name conditions clearly: "If High Priority" not "Condition 1". This makes
workflows easier to understand.
Use Manual for simple logic checks. Use Prompt AI for complex evaluations that need context analysis.
## Next Steps
Use AI for complex decisions
Write custom JavaScript logic
Make API calls based on conditions
Learn how to use variables in conditions
# Delay
Source: https://docs.langdock.com/product/workflows/nodes/delay-node
Add a pause to your workflow execution between 1 second and 24 hours.
## Overview
The Delay node pauses workflow execution for a specified duration. All subsequent nodes wait until the delay period completes before continuing. Perfect for polling, waiting for external processes, rate limiting, or implementing retry logic.
**Best for**: Polling APIs, waiting for processing, rate limiting, retry
delays, and scheduled follow-ups.
## Configuration
**Delay Duration**: Set the pause time between 1 second and 24 hours
**Options:**
* Seconds (1-3600)
* Minutes (1-1440)
* Hours (1-24)
## When to Use Delay
**Perfect for:**
* Polling an API until a process completes
* Waiting for external systems to process data
* Rate limiting to avoid API throttling
* Adding retry delays after errors
* Scheduling follow-up actions
* Implementing exponential backoff
**Not ideal for:**
* Long-term scheduling (use Scheduled Trigger instead)
* Delays longer than 24 hours
* Time-based triggers (use Scheduled Trigger)
## Example Use Cases
### Retry with Backoff
```text theme={null}
HTTP Request: Call API
→ [On Error] → Delay: 5 seconds
→ HTTP Request: Retry API call
→ [On Error] → Delay: 15 seconds
→ HTTP Request: Final retry
```
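The retry pattern above roughly triples the wait between attempts (5s, then 15s). A small helper to compute such a schedule might look like this (hypothetical, not a built-in):

```javascript theme={null}
// Compute an exponential backoff schedule: base, base*3, base*9, ...
function backoffDelays(baseSeconds, retries) {
  return Array.from({ length: retries }, (_, i) => baseSeconds * 3 ** i);
}

const delays = backoffDelays(5, 3);
console.log(delays); // [5, 15, 45]
```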
### Waiting for Processing
```text theme={null}
Action: Submit document for processing
→ Delay: 1 minute
→ HTTP Request: Fetch processed document
→ Agent: Analyze results
```
## How It Works
1. Workflow reaches the Delay node
2. Execution pauses for the specified duration
3. After the delay, workflow continues to the next node
4. All subsequent nodes wait for the delay to complete
**Important**: The delay is a real pause in execution. If you set a 1-hour
delay, the workflow will literally wait 1 hour before continuing.
## Use with Loops
Delays are especially useful inside loops to control execution rate:
```text theme={null}
Loop: Process 100 items
Variable: item
→ HTTP Request: Process {{item.id}}
→ Delay: 1 second (prevents rate limiting)
```
**Without delay**: 100 API calls in \~10 seconds (may hit rate limits)\
**With 1-second delay**: 100 API calls in \~100 seconds (stays under limits)
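The timing estimate above is simple arithmetic; assuming roughly 100 ms per API call (an illustrative figure):

```javascript theme={null}
const items = 100;
const callMs = 100;    // assumed time per API call
const delayMs = 1000;  // 1-second Delay node per iteration

const withoutDelay = items * callMs;          // 10000 ms, about 10 seconds
const withDelay = items * (callMs + delayMs); // 110000 ms, about 110 seconds
console.log(withoutDelay / 1000, withDelay / 1000);
```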
## Limitations
* **Minimum delay**: 1 second
* **Maximum delay**: 24 hours
* **No cancellation**: Once a delay starts, it cannot be interrupted
* **Workflow must stay active**: The workflow continues running during the delay
Long delays keep the workflow execution running. For delays longer than a few
hours, consider using a Scheduled Trigger to restart the workflow instead.
## Cost Considerations
Delays themselves are free, but:
* Workflow execution time includes the delay period
* Long delays keep the workflow "running"
* Consider if a scheduled workflow restart would be more efficient
## Best Practices
Long delays keep workflows running and can impact costs. Use Scheduled Triggers for delays over a few hours.
Workflows have execution time limits. Very long delays may cause the workflow
to timeout.
Start with shorter delays and increase gradually if needed. Don't poll more
frequently than necessary.
Add a comment to the delay node explaining why the pause is needed - your future self will thank you.
## Next Steps
Combine with conditions for smart polling
Poll APIs with delays
Learn about workflow execution flow
Add manual approval steps with delays
# File Search
Source: https://docs.langdock.com/product/workflows/nodes/file-search-node
Search and retrieve information from your knowledge folders to enrich workflows with organizational knowledge.
## Overview
The File Search node queries your knowledge folders to retrieve relevant information and context. Connect your workflow to your organization's knowledge base - search through documents, files, and data stored in knowledge folders to enrich AI responses, validate information, or provide context for decisions.
**Best for**: Knowledge retrieval, document search, context enrichment, RAG
(Retrieval Augmented Generation), and accessing organizational knowledge.
## When to Use File Search
**Perfect for:**
* Searching company documentation and knowledge bases
* Retrieving relevant context for AI agent responses
* Finding specific information across multiple documents
* Implementing RAG (Retrieval Augmented Generation) patterns
* Validating information against internal knowledge
* Enriching workflows with organizational data
**Not ideal for:**
* Real-time web search (use Web Search node)
* Fetching data from external APIs (use HTTP Request node)
* Processing individual files (use direct file attachments)
## Configuration
### Knowledge Folder
Select the knowledge folder to search from your workspace's available folders.
**Options:**
* Choose from connected knowledge folders
* Each folder contains your uploaded documents and files
* Folders can include PDFs, Word docs, spreadsheets, and more
### Search Query
The search query to find relevant information. Supports Manual, Auto, and Prompt AI modes.
**Manual mode examples:**
```handlebars theme={null}
{{trigger.output.customer_question}}
```
```handlebars theme={null}
Find information about {{trigger.output.product_name}} pricing and features
```
**Prompt mode:**
```text theme={null}
Generate a search query to find relevant information about the customer's question: {{trigger.output.question}}
```
### Max Results
The maximum number of relevant results to return (default: 5)
**Recommendations:**
* **1-3 results**: Focused, specific queries
* **5-10 results**: Broader context needed
* **10+ results**: Comprehensive searches (may impact performance)
## How It Works
1. Query is processed against the selected knowledge folder
2. Semantic search finds the most relevant document chunks
3. Results are ranked by relevance score
4. Top N results are returned based on max results setting
5. Retrieved information is available to subsequent nodes
## Example Use Cases
### Customer Support with Knowledge Base
```text theme={null}
Form Trigger (Customer question)
→ File Search: Query knowledge folder with {{trigger.question}}
Knowledge Folder: "Support Documentation"
Max Results: 5
→ Agent: Answer question using search results
Context: {{file_search.output.results}}
Question: {{trigger.question}}
→ Notification: Send answer to customer
```
### Product Information Lookup
```text theme={null}
Integration Trigger (Slack question about product)
→ File Search: Search product knowledge
Knowledge Folder: "Product Information"
Query: {{trigger.message}}
Max Results: 3
→ Agent: Summarize relevant product details
→ Action: Reply in Slack thread
```
### Document Validation
```text theme={null}
Form Trigger (User claim submission)
→ File Search: Find relevant policies
Knowledge Folder: "Company Policies"
Query: "{{trigger.claim_type}} policy requirements"
Max Results: 5
→ Agent: Validate claim against policies
Policies: {{file_search.output.results}}
Claim: {{trigger.claim_details}}
→ Condition: Approved or requires review?
```
## Accessing Search Results
Access the retrieved information in subsequent nodes:
```handlebars theme={null}
{{file_search.output.results}}
{{file_search.output.results[0].content}}
{{file_search.output.results[0].score}}
{{file_search.output.results[0].source}}
```
### Result Structure
Each result contains:
* **content**: The relevant text chunk from the document
* **score**: Relevance score (0-1, higher is more relevant)
* **source**: Source file name and location
* **metadata**: Additional file metadata
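Assuming the fields listed above, a single entry in `file_search.output.results` might look like this (all values are invented for illustration, not real API output):

```javascript theme={null}
// Hypothetical example of one search result; field values are invented.
const result = {
  content: "Refunds are processed within 5 business days of approval.",
  score: 0.87,                  // relevance, 0-1 (higher is more relevant)
  source: "refund-policy.pdf",  // source file name and location
  metadata: { folder: "Support Documentation" }
};
console.log(result.source, result.score);
```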
**Using in Agent prompts:**
```handlebars theme={null}
Context from knowledge base:
{{file_search.output.results}}
Based on the above context, answer this question:
{{trigger.output.question}}
```
## Limitations
* **Knowledge Folder Scope**: Only searches within the selected knowledge folder
* **Result Quality**: Depends on quality and completeness of uploaded documents
* **Chunk Size**: Large documents are split into chunks; relevant information might span multiple results
* **Real-time Updates**: Document changes require reprocessing before they appear in search results
**Important**: Ensure your knowledge folders are regularly updated with current information for accurate search results.
## Best Practices
More specific queries return more relevant results. Include key terms, product names, or topics rather than generic searches.
Start with 5 results and adjust based on response quality. Too few might miss important context, too many can dilute relevance.
Organize knowledge folders by topic or domain for more targeted searches. Separate technical docs from marketing content.
File Search is most powerful when combined with Agent nodes. The agent can synthesize and interpret the retrieved information.
Test your file search with actual questions users might ask to ensure knowledge folder content is sufficient and queries return relevant results.
Add a condition after file search to handle cases where no relevant results are found. Provide fallback responses or escalation paths.
## Next Steps
Process and synthesize search results with AI
Learn how to set up and manage knowledge folders
Search the internet for current information
Route based on search result quality
# Form Trigger
Source: https://docs.langdock.com/product/workflows/nodes/form-trigger
Start workflows from custom form submissions with built-in validation and public access options.
## Overview
The Form Trigger creates a custom web form that starts your workflow when submitted. It's perfect for collecting information from users - whether they're internal team members or external customers - and automatically processing that data.
**Best for**: Intake forms, data collection, customer requests, feedback
gathering, and application submissions.
## When to Use Form Trigger
**Perfect for:**
* Customer feedback or support request forms
* Internal request forms (IT tickets, access requests)
* Application or registration forms
* Survey responses that need processing
* Data collection from non-technical users
**Not ideal for:**
* System-to-system integrations (use Webhook Trigger)
* Scheduled recurring tasks (use Scheduled Trigger)
* Processing existing data (use Manual Trigger)
## Configuration
### Basic Setup
1. **Form Title**: Give your form a descriptive name
2. **Description**: Optional subtitle or instructions
3. **Thank You Message**: Message shown after successful submission
### Field Types
Add fields to collect specific data:
| Field Type | Description | Use Case |
| --------------- | ------------------------------ | -------------------------------- |
| **Text** | Single-line text input | Name, email, title |
| **Long Text** | Multi-line text area | Feedback, descriptions, comments |
| **Number** | Numeric input | Quantity, amount, rating |
| **Email** | Email with validation | Contact information |
| **Phone** | Phone number input | Contact information |
| **Date** | Date picker | Due dates, event dates |
| **Dropdown** | Select from predefined options | Category, priority, department |
| **Checkbox** | True/false selection | Agreements, preferences |
| **File Upload** | Attachment upload | Documents, images, PDFs |
### Field Configuration
For each field, configure:
* **Field Name**: Internal identifier (use snake\_case: `customer_name`)
* **Label**: Display text shown to users
* **Description**: Optional help text
* **Required**: Whether the field must be filled
## How It Works
1. Form URL is generated automatically
2. User fills out the form fields
3. Form validates all required fields and formats
4. On submission, workflow starts with form data
5. User sees the thank you message
6. Form data is available in the workflow via `{{form.output.field_name}}`, where `form` is the trigger node's name
## Example Use Cases
### Customer Feedback Form
```text theme={null}
Form Trigger
- name (text, required)
- email (email, required)
- feedback (long text, required)
- product (dropdown: App, Web, API)
→ Agent: Analyze sentiment and categorize
→ Condition: Check priority
→ High: Notify product team
→ Low: Store in database
→ Action: Create ticket in support system
```
### IT Support Request
```text theme={null}
Form Trigger
- requester_name (text, required)
- department (dropdown)
- issue_type (dropdown: Hardware, Software, Access)
- description (long text, required)
- urgency (dropdown: Low, Medium, High)
→ Agent: Classify issue and suggest solution
→ Action: Create JIRA ticket
→ Notification: Alert IT team
```
### Job Application
```text theme={null}
Form Trigger
- applicant_name (text, required)
- email (email, required)
- position (dropdown)
- resume (file upload, required)
- cover_letter (long text)
→ Agent: Extract candidate info from resume
→ Agent: Screen against job requirements
→ Condition: Check fit score
→ Strong: Schedule interview
→ Moderate: Add to review queue
→ Weak: Send polite rejection
```
## Accessing Form Data
Use the form trigger node's name to access submitted data:
```handlebars theme={null}
Customer Name: {{form.output.name}}
Email: {{form.output.email}}
Feedback: {{form.output.feedback}}
Selected Product: {{form.output.product}}
```
### File Uploads
For file upload fields, access the file metadata:
```handlebars theme={null}
File name: {{form.output.metadata.filename}}
MIME type: {{form.output.metadata.mimeType}}
File size: {{form.output.metadata.size}}
```
## Sharing Your Form
### Copy Form URL
1. Click on the Form Trigger node
2. Click "Copy URL" in the node toolbar
3. Share the URL via email, chat, or embed on your website
4. The URL is publicly accessible if the form has been configured for public access
### Embedding Options
**Direct Link**
```html theme={null}
<!-- YOUR_FORM_URL is a placeholder for the copied form URL -->
<a href="YOUR_FORM_URL">Submit Feedback</a>
```
**iFrame Embed**
```html theme={null}
<!-- YOUR_FORM_URL is a placeholder for the copied form URL -->
<iframe src="YOUR_FORM_URL" width="100%" height="600" frameborder="0"></iframe>
```
## Best Practices
Only ask for essential information. Long forms have higher abandonment
rates. You can always collect additional details later in the workflow.
Field labels should clearly indicate what information is needed. Add
description text for fields that might be confusing.
Pre-fill fields with sensible defaults when possible to reduce user effort.
Submit test forms yourself to ensure the experience is smooth and instructions
are clear.
## Next Steps
Receive HTTP POST requests from external systems
Process form submissions with AI
Step-by-step tutorial with form example
Learn about configuring form fields
# Guardrails
Source: https://docs.langdock.com/product/workflows/nodes/guardrails-node
Validate AI outputs and workflow data with automated checks for safety, accuracy, and compliance.
## Overview
The Guardrails node validates content using AI-powered checks to ensure safety, accuracy, and compliance. Each guardrail uses an LLM as a judge to evaluate your input against specific criteria, failing the workflow if confidence thresholds are exceeded.
**Best for**: Content moderation, PII detection, hallucination checks, jailbreak prevention, and custom validation rules.
## How It Works
1. Provide input content to validate (from previous nodes)
2. Enable specific guardrail checks
3. Set confidence threshold for each check (0-1)
4. Choose AI model for evaluation
5. If any check exceeds threshold → Node fails and flags the issue
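The pass/fail rule in step 5 can be sketched as a simple comparison (illustrative only; the actual confidence score comes from the LLM judge):

```javascript theme={null}
// A check fails when the judge's confidence exceeds the configured threshold.
function checkFails(confidence, threshold) {
  return confidence > threshold;
}

console.log(checkFails(0.65, 0.7)); // false: below threshold, check passes
console.log(checkFails(0.85, 0.7)); // true: node fails and flags the issue
```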
## Configuration
### Input
The content you want to validate. Supports Manual, Auto, and Prompt AI modes.
**Example:**
```handlebars theme={null}
{{agent.output.response}}
{{trigger.output.user_message}}
{{http_request.output.content}}
```
### Model Selection
Choose the AI model used to evaluate all enabled guardrails. More capable models provide more accurate detection but cost more.
## Available Guardrails
### Personally Identifiable Information (PII)
Detects personal information like names, emails, phone numbers, addresses, SSNs, credit cards, etc.
**When to use:**
* Before storing user-generated content
* When sharing data externally
* Compliance requirements (GDPR, HIPAA)
* Customer service workflows
**Configuration:**
* **Confidence Threshold**: 0.7 (recommended)
* Higher threshold = stricter detection
**Example:**
```text theme={null}
Input: {{agent.output.customer_response}}
Threshold: 0.8
Result: Fails if PII detected with >80% confidence
```
***
### Moderation
Checks for inappropriate, harmful, or offensive content including hate speech, violence, adult content, harassment, etc.
**When to use:**
* User-generated content platforms
* Public-facing communications
* Community moderation
* Customer-facing outputs
**Configuration:**
* **Confidence Threshold**: 0.6 (recommended)
* Adjust based on your content policies
***
### Jailbreak Detection
Identifies attempts to bypass AI safety controls or manipulate the AI into unintended behaviors.
**When to use:**
* Processing user prompts before sending to AI
* Public AI interfaces
* Workflows with user-provided instructions
* Security-sensitive applications
**Configuration:**
* **Confidence Threshold**: 0.7 (recommended)
* Higher threshold for fewer false positives
**Example:**
```text theme={null}
Input: {{trigger.user_prompt}}
Threshold: 0.75
Flags: Attempts to "ignore previous instructions" or similar
```
***
### Hallucination Detection
Detects when AI-generated content contains false or unverifiable information.
**When to use:**
* Fact-based content generation
* Customer support responses
* Financial or medical information
* Any workflow where accuracy is critical
**Configuration:**
* **Confidence Threshold**: 0.6 (recommended)
* Requires reference data for comparison
**Example:**
```text theme={null}
Input: {{agent.generated_summary}}
Reference: {{http_request.original_data}}
Threshold: 0.7
Checks: Does summary accurately reflect source data?
```
***
### Custom Evaluation
Define your own validation criteria using natural language instructions.
**When to use:**
* Domain-specific validation
* Brand voice compliance
* Custom business rules
* Specialized content requirements
**Configuration:**
* **Evaluation Criteria**: Describe what to check for
* **Confidence Threshold**: Set based on strictness needed
**Example:**
```text theme={null}
Criteria: "Check if this response maintains our brand voice:
- Professional but friendly tone
- No jargon or technical terms
- Addresses customer by name
- Offers clear next steps"
Input: {{agent.email_response}}
Threshold: 0.8
```
## Setting Confidence Thresholds
The confidence threshold determines how strict each check is:
| Threshold | Behavior | Use When |
| ----------- | ----------- | ----------------------------------------- |
| **0.3-0.5** | Lenient | Avoid false positives, informational only |
| **0.6-0.7** | Balanced | Most use cases, good accuracy |
| **0.8-0.9** | Strict | High-risk scenarios, critical validation |
| **0.9-1.0** | Very Strict | Only flag very obvious violations |
Start with **0.7** as a balanced default, then adjust based on false positives or missed detections.
## Example Workflows
### Content Moderation Pipeline
```text theme={null}
Trigger: Form submission (user comment)
→ Guardrails:
✅ PII Detection (threshold: 0.8)
✅ Moderation (threshold: 0.6)
Input: {{trigger.comment}}
→ [On Success] → Post comment publicly
→ [On Failure] → Send to manual review queue
```
### AI Response Validation
```text theme={null}
Agent: Generate customer response
→ Guardrails:
✅ Hallucination (threshold: 0.7)
✅ Custom: "Professional and helpful tone"
Input: {{agent.response}}
→ [On Success] → Send email to customer
→ [On Failure] → Regenerate with different prompt
```
### Multi-Check Validation
```text theme={null}
Agent: Generate article summary
→ Guardrails:
✅ PII Detection (threshold: 0.8)
✅ Hallucination (threshold: 0.7)
✅ Custom: "No promotional language" (threshold: 0.75)
Input: {{agent.summary}}
→ [On Success] → Publish to website
→ [On Failure] → Return to editor for revision
```
## Handling Failures
When a guardrail check fails, the workflow stops at the Guardrails node. You can configure error handling to route to alternative paths, send notifications, or trigger fallback actions.
## When to Use Each Guardrail
Use PII detection for:
* Public content that shouldn't contain personal information
* Data being sent to third parties or external systems
* Compliance-sensitive workflows (GDPR, HIPAA, etc.)
* Preventing accidental exposure of sensitive user data
Use moderation for:
* User-generated content that needs review
* Public-facing outputs and communications
* Community platforms and forums
* Filtering inappropriate or harmful content
Use jailbreak detection for:
* User-provided prompts or instructions to AI
* Public AI interfaces accessible to external users
* Security-critical applications where prompt manipulation is a risk
* Protecting against attempts to bypass system constraints
Use hallucination detection for:
* Fact-based content generation requiring accuracy
* Customer support responses with specific information
* Financial or medical information where accuracy is critical
* Any content where false information could cause harm
Use custom evaluation for:
* Brand compliance and tone of voice guidelines
* Domain-specific rules and industry standards
* Quality standards unique to your organization
* Business-specific requirements not covered by other guardrails
## Best Practices
Use multiple guardrails together for comprehensive validation. PII + Moderation is a common combination.
Begin with 0.7 and adjust based on results. Too low = false positives, too high = missed issues.
Don't just fail the workflow—add error paths to notify teams, log violations, or trigger alternative actions.
Test guardrails with borderline content to calibrate thresholds correctly.
More capable models (GPT-4) provide better detection but cost more. Balance accuracy needs with budget.
Write clear, specific criteria for custom evaluations so the AI understands exactly what to check.
## Next Steps
Validate AI-generated content
Route based on validation results
Add manual review for sensitive content
Build your first workflow with validation
# HTTP Request
Source: https://docs.langdock.com/product/workflows/nodes/http-request-node
Make HTTP requests to external APIs for custom integrations and data fetching.
## Overview
The HTTP Request node lets you call any external API - fetch data, send updates, trigger actions, or integrate with services that don't have native integrations.
**Best for**: Custom API integrations, fetching external data, sending
webhooks, and connecting to any HTTP-based service.
## Configuration
### Import from cURL
Click "Import from cURL" to paste a cURL command and automatically populate all fields (URL, method, headers, parameters). Great for quickly setting up requests from API documentation.
### URL (Required)
The API endpoint to call. Supports Auto, Manual, and Prompt AI modes.
**Manual mode example:**
```handlebars theme={null}
https://api.example.com/users/{{trigger.output.user_id}}/orders
```
### Method
Select the HTTP method:
* **GET**: Fetch data
* **POST**: Create new resources
* **PUT**: Replace existing resources
* **PATCH**: Update existing resources
* **DELETE**: Remove resources
### Headers
Add custom headers as key-value pairs. Common headers:
**Authentication:**
```text theme={null}
Key: Authorization
Value: Bearer {{trigger.output.api_token}}
```
**Content Type:**
```text theme={null}
Key: Content-Type
Value: application/json
```
Click "Add header" to include multiple headers.
### Query Parameters
Add URL query parameters as key-value pairs instead of including them in the URL.
**Example:**
```text theme={null}
URL: https://api.example.com/search
Parameters:
- Key: query, Value: {{trigger.output.search_term}}
- Key: limit, Value: 10
Results in: https://api.example.com/search?query=laptops&limit=10
```
### Body (POST/PUT/PATCH only)
The request payload, typically JSON format. Supports variables from previous nodes.
```json theme={null}
{
"name": "{{trigger.output.name}}",
"email": "{{trigger.output.email}}",
"status": "{{agent.output.category}}",
"metadata": {
"source": "workflow",
"processed_at": "{{trigger.output.timestamp}}"
}
}
```
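Conceptually, each `{{node.output.field}}` placeholder in the body resolves against the outputs of previous nodes before the request is sent. A minimal sketch of that substitution (the resolution rules and context shape are assumptions for illustration, not Langdock's actual template engine):

```python
import re

def render(template: str, context: dict) -> str:
    """Replace {{dotted.path}} placeholders with values from a nested dict."""
    def resolve(match):
        value = context
        for key in match.group(1).strip().split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\{\{([^}]+)\}\}", resolve, template)

# Illustrative context mimicking prior node outputs:
body = render(
    '{"name": "{{trigger.output.name}}", "status": "{{agent.output.category}}"}',
    {"trigger": {"output": {"name": "Ada"}},
     "agent": {"output": {"category": "vip"}}},
)
```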
## Example Use Cases
### Fetch User Data (GET)
```text theme={null}
Method: GET
URL: https://api.crm.com/users/{{trigger.user_id}}
Headers:
- Authorization: Bearer YOUR_TOKEN
```
### Create Record (POST)
```text theme={null}
Method: POST
URL: https://api.system.com/records
Headers:
- Content-Type: application/json
Body:
{
"title": "{{trigger.output.title}}",
"category": "{{agent.output.category}}",
"priority": "{{agent.output.priority}}"
}
```
### Search with Parameters (GET)
```text theme={null}
Method: GET
URL: https://api.example.com/search
Query Parameters:
- q: {{trigger.search_term}}
- limit: 20
- format: json
```
### Update Status (PATCH)
```text theme={null}
Method: PATCH
URL: https://api.app.com/items/{{trigger.id}}
Headers:
- Content-Type: application/json
Body:
{
"status": "completed",
"updated_by": "workflow"
}
```
## Accessing Response Data
After the HTTP Request executes, access the response in subsequent nodes:
```handlebars theme={null}
{{http_node.output.status}} → Status code (200, 404, etc.)
{{http_node.output.data}} → Response body
{{http_node.output.data.user.name}} → Nested response data
{{http_node.output.data.items[0].id}} → Array items
{{http_node.output.headers}} → Response headers
```
### Response Status Codes
Use the status code to check if the request succeeded:
```handlebars theme={null}
{{ http_node.output.status === 200 }} → Success
{{ http_node.output.status >= 400 }} → Error occurred
```
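The path syntax above (dots for nesting, `[0]` for array indices) behaves like walking a parsed JSON object. A sketch of that lookup, with a made-up response shape for illustration:

```python
import re

def get_path(obj, path: str):
    """Follow a dotted path with optional [index] segments, e.g. data.items[0].id."""
    for part in re.findall(r"[^.\[\]]+", path):
        obj = obj[int(part)] if part.isdigit() else obj[part]
    return obj

# Illustrative parsed response:
response = {
    "status": 200,
    "data": {"user": {"name": "Ada"}, "items": [{"id": 7}]},
}
ok = get_path(response, "status") == 200
name = get_path(response, "data.user.name")
first_id = get_path(response, "data.items[0].id")
```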
## Best Practices
If you have a working cURL command from API docs, use "Import from cURL" to automatically set up all fields correctly.
Always add error handling. Use a Condition node after the HTTP Request to check `{{ http_node.output.status === 200 }}`.
Add query parameters in the Parameters section instead of hardcoding them in
the URL. This makes them easier to manage.
Use the node's test button to verify the request works before building the rest of your workflow.
## Next Steps
Receive HTTP requests
Transform API responses
# Integration Trigger
Source: https://docs.langdock.com/product/workflows/nodes/integration-trigger
Start workflows automatically when events occur in your connected applications.
## Overview
The Integration Trigger connects your workflows to real-time events from your connected applications. When something happens in Slack, Google Sheets, your CRM, or any other integrated service, your workflow springs into action automatically.
**Best for**: Responding to events in connected apps, real-time automation,
cross-platform workflows, and event-driven processes.
## When to Use Integration Trigger
**Perfect for:**
* New Slack messages in specific channels
* New or updated rows in Google Sheets
* Emails received in specific folders
* Calendar events created or updated
* New files added to folders (Drive, Dropbox, OneDrive)
* CRM record changes (new leads, updated deals)
* Project management updates (new tasks, status changes)
**Not ideal for:**
* Custom API integrations (use Webhook Trigger)
* Scheduled recurring tasks (use Scheduled Trigger)
* User-submitted forms (use Form Trigger)
## Configuration
### Step 1: Select Integration
Choose from your workspace's connected integrations:
* **Communication**: Slack, Microsoft Teams, Gmail
* **Productivity**: Google Sheets, Notion, Asana, Jira
* **Storage**: Google Drive, OneDrive, Dropbox
* **CRM**: Salesforce, HubSpot
* **Calendar**: Google Calendar, Outlook Calendar
* And many more...
### Step 2: Choose Event Type
Each integration offers specific trigger events:
**Slack**
* New message in channel
* New direct message
* Message with specific emoji reaction
* User joined channel
**Google Sheets**
* New row added
* Row updated
* Row deleted
* Spreadsheet opened
**Gmail**
* New email received
* Email in specific folder
* Email with specific label
* Email matching search criteria
**Google Calendar**
* Event created
* Event updated
* Event starting soon
* Event canceled
### Step 3: Configure Event Filters
Narrow down which events trigger your workflow:
**For Slack:**
* Specific channels only
* Messages from certain users
* Messages containing keywords
* Messages with attachments
**For Google Sheets:**
* Specific spreadsheet and sheet
* Rows matching criteria
* Changes to specific columns
**For Email:**
* From specific senders
* With specific subjects
* To specific addresses
* With attachments
### Step 4: Connect Account
If not already connected:
1. Click "Connect Account"
2. Authorize Langdock to access the integration
3. Select the specific account (if multiple)
4. Grant necessary permissions
## Example Use Cases
### Slack → Ticket Creation
```text theme={null}
Integration Trigger (New message in #support channel)
→ Agent: Extract issue details from message
→ Action: Create JIRA ticket
→ Action: Reply in Slack thread with ticket number
```
### Google Sheets → Data Processing
```text theme={null}
Integration Trigger (New row in "Leads" sheet)
→ HTTP Request: Enrich lead data from external API
→ Agent: Score lead quality
→ Condition: Check score
→ High: Notify sales team
→ Low: Add to nurture campaign
→ Action: Update row with score and status
```
### Gmail → Document Processing
```text theme={null}
Integration Trigger (New email with attachment)
→ Agent: Extract and summarize document content
→ Action: Save summary to Google Drive
→ Action: Create task in project management tool
→ Action: Reply to email confirming receipt
```
### Calendar → Meeting Prep
```text theme={null}
Integration Trigger (Event starting in 1 hour)
→ HTTP Request: Fetch meeting context from CRM
→ Agent: Generate meeting brief
→ Action: Send Slack message to attendees
→ Action: Create Google Doc with meeting notes template
```
### Drive → Content Approval
```text theme={null}
Integration Trigger (New file in "Pending Approval" folder)
→ Agent: Review content against guidelines
→ Condition: Check approval recommendation
→ Approved: Move to "Published" folder
→ Needs Changes: Send feedback to creator
→ Notification: Alert stakeholders of decision
```
## Accessing Integration Data
Use the `trigger` variable to access event data:
### Example: Gmail Event Data
```handlebars theme={null}
Subject:
{{trigger.output.subject}}
From:
{{trigger.output.from}}
Body:
{{trigger.output.body}}
Has Attachments:
{{trigger.output.has_attachments}}
Labels:
{{trigger.output.labels}}
```
## Testing Integration Triggers
### Test Panel
1. Click on the Integration Trigger node
2. Click "Test" in the toolbar
3. View recent events from the integration
4. Select an event to test with
5. Run the workflow with that event's data
### Trigger Real Events
The best way to test:
1. Manually create the event in the integration
* Send a Slack message
* Add a row to Google Sheets
* Create a calendar event
2. Wait for the event to trigger (usually within seconds)
3. Check the Runs tab for execution
4. Review the workflow results
## Common Integration Patterns
### Bidirectional Sync
```text theme={null}
Integration Trigger (Sheets updated)
→ Code: Transform data
→ HTTP Request: Update external system
→ Condition: Check if update successful
→ Success: Update status in Sheets
→ Failure: Notify admin
```
## Best Practices
Use integration filters to only trigger on relevant events. Processing every
Slack message is expensive and slow.
Some integrations may send duplicate events. Add logic to detect and skip
duplicates using unique IDs.
Integration permissions can expire. Set up alerts for authentication
failures.
Don't just use test data - trigger real events in the integration and verify
your workflow handles them correctly.
If an integration event happens frequently (e.g., many Slack messages),
consider adding conditions to prevent overwhelming your workflow.
Store original event data early in the workflow in case you need to
reference it later.
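The duplicate-detection advice above amounts to keeping a set of IDs you have already processed. A minimal sketch (the `"id"` field is an assumption; use whatever unique identifier the integration's events actually carry):

```python
def deduplicate(events, seen=None):
    """Return each event once, keyed by its unique ID."""
    seen = set() if seen is None else seen
    unique = []
    for event in events:
        if event["id"] in seen:
            continue  # skip duplicate delivery of the same event
        seen.add(event["id"])
        unique.append(event)
    return unique

# Simulated duplicate delivery from an integration:
events = [{"id": "a1"}, {"id": "a1"}, {"id": "b2"}]
unique = deduplicate(events)
```

In a workflow, the `seen` set would need to live somewhere persistent between runs (a sheet, database, or external store), since each run starts fresh.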
## Next Steps
Perform actions in integrated apps
Route based on integration data
Process integration data with AI
Trigger workflows with custom webhooks
# Loop
Source: https://docs.langdock.com/product/workflows/nodes/loop-node
Iterate over arrays and process multiple items with the same logic.
## Overview
The Loop node processes arrays of data - iterate through lists of customers, orders, files, or any collection, applying the same logic to each item.
**Best for**: Batch processing, processing multiple records, generating
individual reports, and iterating over lists.
## Configuration
**Input Array**: Select the array to loop over
```handlebars theme={null}
{{trigger.customers}}
{{api_response.items}}
{{google_sheets.rows}}
```
**Loop Variable Name**: Name for current item (e.g., `customer`, `item`, `record`)
**Max Iterations**: Safety limit (default: 100)
## Inside the Loop
Access current item with your loop variable name:
```handlebars theme={null}
{{customer.name}}
{{customer.email}}
{{customer.status}}
```
## Example Use Cases
### Process Customer List
```text theme={null}
Loop over {{trigger.customers}}
Variable: customer
→ Agent: Analyze {{customer.feedback}}
→ Condition: Check {{customer.score}}
→ High: Send thank you email
→ Low: Create follow-up task
```
### Batch Update Records
```text theme={null}
Loop over {{api_response.records}}
Variable: record
→ Code: Transform {{record.data}}
→ HTTP Request: Update record {{record.id}}
```
### Generate Individual Reports
```text theme={null}
Loop over {{trigger.team_members}}
Variable: member
→ HTTP Request: Fetch {{member.id}} data
→ Agent: Generate report for {{member.name}}
→ Action: Email report to {{member.email}}
```
## Best Practices
Prevent infinite loops and runaway costs. Set a reasonable maximum.
Instead of 100 individual agent calls, batch items into groups of 10.
Add a condition before the loop to check that the array has items.
Loops with AI agents can be expensive. Calculate: cost per item × number of
items.
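The batching idea above can be sketched as a simple chunking step: group items before the loop so each agent call handles a batch instead of a single item. Batch size 10 follows the example in the text:

```python
def chunk(items, size=10):
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# 100 items → 10 batches → 10 agent calls instead of 100.
batches = chunk(list(range(100)), size=10)
calls_saved = 100 - len(batches)
```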
## Cost Warning
Loops can consume significant credits when processing many items with AI
agents. A loop with 100 items and an agent call at $0.10 each = $10 per run.
## Next Steps
Process items with AI
Optimize loop costs
# Manual Trigger
Source: https://docs.langdock.com/product/workflows/nodes/manual-trigger
Run workflows on-demand with a button click for testing and ad-hoc processing.
## Overview
The Manual Trigger allows you to start workflows on-demand with a button click. It's the simplest trigger type - perfect for testing workflows during development or for workflows that should only run when explicitly invoked.
**Best for**: Testing workflows, ad-hoc data processing, and workflows that
require human initiation.
## When to Use Manual Trigger
**Perfect for:**
* Testing and debugging workflows during development
* On-demand data processing that requires human judgment
* Workflows triggered by users through your application
* Administrative tasks that should be manually initiated
**Not ideal for:**
* Automated, recurring processes (use Scheduled Trigger)
* Responding to external events (use Webhook or Integration Trigger)
* Collecting data from users (use Form Trigger)
## Configuration
The Manual Trigger requires no configuration - it's ready to use immediately.
## Example Use Cases
### Ad-Hoc Data Analysis
```text theme={null}
Manual Trigger (with date range input)
→ HTTP Request: Fetch data for date range
→ Agent: Analyze data and generate insights
→ Notification: Send report to requester
```
**Why Manual?** The analysis is needed sporadically and requires human judgment on which date range to analyze.
### Administrative Tasks
```text theme={null}
Manual Trigger
→ Code: Generate system report
→ Action: Archive old records
→ Notification: Confirm completion
```
**Why Manual?** These are maintenance tasks that should only run when an admin explicitly initiates them.
### Testing Integrations
```text theme={null}
Manual Trigger
→ HTTP Request: Test API endpoint
→ Agent: Validate response
→ Notification: Send test results
```
**Why Manual?** Used during development to test API integrations before setting up automated triggers.
Use the Manual Trigger for initial development and testing of all workflows,
even if you plan to switch to a different trigger type later.
## Limitations
* **No Automation**: Requires human action to initiate
* **No Scheduling**: Cannot run on a regular schedule
* **No Event Response**: Cannot react to external events or integrations
## Best Practices
Start every workflow with a Manual Trigger during development. Once tested
and working, switch to the appropriate automated trigger type.
If your workflow needs input data, use clear field labels and descriptions so
users know exactly what to provide.
Add a workflow description explaining when and why someone should manually
trigger it.
Manual triggers can be run by anyone with access to the workflow. Use
sharing settings to control who can execute them.
## Next Steps
Collect data from users with custom forms
Run workflows automatically on a schedule
Build your first workflow
Collect data with custom forms
# Send Notification
Source: https://docs.langdock.com/product/workflows/nodes/notification-node
Send notifications to team members via in-app alerts.
## Overview
The Send Notification node sends alerts directly to your Langdock inbox. Create custom messages to notify yourself about workflow events, completion status, important data, or when something needs attention.
**Best for**: Workflow completion alerts, error notifications, status updates,
data summaries, and custom alerts requiring attention.
## Configuration
**Message**: Custom notification message (supports variables and markdown)
You can include any data from previous nodes using variables to create rich, contextual notifications.
## Example Notifications
### High Priority Alert
```handlebars theme={null}
🚨 **High Priority Customer Feedback**

A high-priority support request was received:

**Customer:** {{trigger.output.customer_name}}
**Email:** {{trigger.output.email}}
**Category:** {{agent.output.category}}
**Sentiment:** {{agent.output.sentiment}}
**Summary:** {{agent.output.summary}}

**Action Required:** Please respond within 1 hour.
```
### Processing Complete
```handlebars theme={null}
✨ **Batch Processing Complete**

Successfully processed:

**Success:** {{code.output.success_count}}
**Errors:** {{code.output.error_count}}
**Duration:** {{code.output.duration_minutes}} minutes

Check the logs for details.
```
## Markdown Formatting
Make your notifications easier to read with markdown:
```handlebars theme={null}
# Important Alert

**Bold text** for emphasis
*Italic text* for notes

- Bullet point 1
- Bullet point 2
- Bullet point 3

[Link to dashboard](https://app.example.com/dashboard)
```
## Best Practices
Include essential information but don't overwhelm. Use bullet points for multiple items.
Include relevant IDs, names, or links so you can quickly take action.
Structure your message with headers and sections for easy scanning.
Tell yourself what to do next: "Review the dashboard", "Respond to customer",
etc.
Run test workflows to see how notifications appear in your inbox.
## Next Steps
Learn how to use variables in notifications
Send messages via integrations
Generate notification content with AI
Understand workflow fundamentals
# Scheduled Trigger
Source: https://docs.langdock.com/product/workflows/nodes/scheduled-trigger
Run workflows automatically on a recurring schedule with flexible timing options.
## Overview
The Scheduled Trigger runs your workflow automatically at specified times or intervals. Whether you need daily reports, hourly data syncs, or monthly cleanup tasks, scheduled workflows handle recurring automations reliably.
**Best for**: Daily reports, periodic data syncs, recurring maintenance tasks,
scheduled analysis, and time-based automations.
## When to Use Scheduled Trigger
**Perfect for:**
* Daily, weekly, or monthly reports
* Periodic data synchronization between systems
* Scheduled data analysis and insights
* Recurring maintenance or cleanup tasks
* Time-based monitoring and alerts
**Not ideal for:**
* Event-driven workflows (use Webhook or Integration Trigger)
* User-initiated processes (use Manual or Form Trigger)
* Real-time data processing
## Configuration
### Schedule Options
**Quick Schedules** (Visual Builder)
* Every few minutes
* Every hour
* Every day at specific time
* Every week on specific days
* Every month on specific date
* Custom time interval (via prompt)
### Timezone
Set the timezone for your schedule:
* Defaults to your account timezone
* Supports all standard timezones
* Critical for globally distributed teams
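To see why the timezone setting matters: the same "daily at 9 AM" schedule fires at different UTC instants depending on the configured zone, and shifts across daylight saving changes. A sketch of that calculation (this is illustrative logic, not Langdock's scheduler):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def next_daily_run(now_utc: datetime, hour: int, tz: str) -> datetime:
    """Next occurrence of `hour`:00 local time in `tz`, returned as UTC."""
    local_now = now_utc.astimezone(ZoneInfo(tz))
    run = local_now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if run <= local_now:
        run += timedelta(days=1)  # today's slot already passed
    return run.astimezone(timezone.utc)

now = datetime(2024, 3, 1, 12, 0, tzinfo=timezone.utc)
berlin = next_daily_run(now, 9, "Europe/Berlin")      # 9 AM CET = 08:00 UTC
new_york = next_daily_run(now, 9, "America/New_York")  # 9 AM EST = 14:00 UTC
```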
## Example Use Cases
### Daily Sales Report
```text theme={null}
Scheduled Trigger (Daily at 9 AM)
→ HTTP Request: Fetch yesterday's sales data
→ Code: Calculate metrics (total, growth, top products)
→ Agent: Generate executive summary
→ Notification: Email report to sales team
```
### Hourly Data Sync
```text theme={null}
Scheduled Trigger (Every hour)
→ HTTP Request: Fetch new records from API
→ Loop: For each record
→ Code: Transform data format
→ HTTP Request: POST to destination system
→ Notification: Summary of records synced
```
### Weekly Cleanup Task
```text theme={null}
Scheduled Trigger (Weekly on Sunday)
→ HTTP Request: Fetch records older than 90 days
→ Loop: For each old record
→ HTTP Request: Archive to cold storage
→ HTTP Request: Delete from active database
→ Notification: Cleanup summary
```
### Monthly Billing
```text theme={null}
Scheduled Trigger (1st of the month)
→ HTTP Request: Get all active subscriptions
→ Loop: For each subscription
→ Code: Calculate billing amount
→ HTTP Request: Create invoice
→ Action: Send invoice email
→ Notification: Billing run complete
```
## Testing Scheduled Workflows
### Manual Test Runs
1. Click on the Scheduled Trigger node
2. Click "Test" to trigger a one-time run
3. Review execution with current timestamp
4. Verify time-based logic works correctly
## Best Practices
Schedule resource-intensive workflows during off-peak hours (nights,
weekends) to minimize impact on systems.
Always include error notifications for scheduled workflows. You won't be
watching when they run.
If your system is down during a scheduled run, decide if you need to catch
up or skip the missed execution.
Test your workflow with different timestamps to ensure time-based logic
works correctly (e.g., end of month, leap years).
Don't schedule more frequently than needed. Every 5 minutes might be
excessive - consider if hourly would work.
Track how long your scheduled workflows take. If execution time is close to
the interval, you risk overlaps.
## Troubleshooting
**Workflow not running?** Check:

* Workflow is deployed (not draft)
* Workflow is active (not paused)
* Schedule is correctly configured

**Running at the wrong time?** Check:

* Timezone setting matches expectation
* Cron expression is correct
* Daylight saving time changes (use UTC to avoid them)

**Runs taking too long?** Solutions:

* Break into smaller workflows
* Increase the schedule interval
* Optimize slow nodes (batch AI calls, parallel requests)
## Next Steps
Trigger from native integration events
Calculate time-based logic and date ranges
Control and optimize workflow costs
Build your first workflow
# Web Search
Source: https://docs.langdock.com/product/workflows/nodes/web-search-node
Search the internet and retrieve relevant information for fact-checking and research.
## Overview
The Web Search node searches the internet and returns relevant results. Perfect for fact-checking, gathering current information, market research, or finding specific data that changes frequently.
**Best for**: Fact-checking, current events, market research, finding specific
information, and gathering context.
## Configuration
**Query**: Define the search query (can include variables)
**Mode**:
* **Automatic**: AI generates optimal search query
* **Manual**: You specify exact search terms
* **Prompt**: Write instructions for AI to generate query
**Number of Results**: How many results to return (default: 5)
## Example Queries
**Manual Query**
```handlebars theme={null}
{{trigger.output.company_name}} recent news 2024
```
**Automatic Mode**
```text theme={null}
Context: {{trigger.customer_message}}
Find: Recent information about the product mentioned
```
**Prompt Mode**
```text theme={null}
Generate a search query to find the latest pricing information for {{trigger.product_name}} from competitor websites.
```
## Accessing Results
```handlebars theme={null}
{{web_search.output.title}}
{{web_search.output.url}}
{{web_search.output.snippet}}
```
## Example Use Cases
**Fact Checking**
```text theme={null}
Web Search: {{agent.output.claim}}
→ Agent: Verify claim against search results
→ Condition: Is claim accurate?
```
**Company Research**
```text theme={null}
Web Search: "{{trigger.output.company_name}}" latest news
→ Agent: Summarize key findings
→ Action: Update CRM with insights
```
**Competitive Analysis**
```text theme={null}
Web Search: {{trigger.output.product}} alternatives pricing
→ Agent: Extract pricing information
→ Code: Compare against our pricing
```
## Next Steps
Analyze search results with AI
Fetch specific URLs
# Webhook Trigger
Source: https://docs.langdock.com/product/workflows/nodes/webhook-trigger
Receive HTTP POST requests to trigger workflows and integrate with external systems.
## Overview
The Webhook Trigger provides a unique HTTP endpoint that external systems can call to start your workflow. It's the bridge between Langdock Workflows and any external service or application that can send HTTP requests.
**Best for**: Real-time integrations, external system events, API-driven
workflows, and connecting services without native integrations.
## When to Use Webhook Trigger
**Perfect for:**
* Receiving events from external services (GitHub, Stripe, custom apps)
* Real-time data processing from external systems
* Building custom integrations
* Connecting services that support webhooks (including other workflows)
* API-driven workflows initiated by other systems
**Not ideal for:**
* User-facing data collection (use Form Trigger)
* Scheduled recurring tasks (use Scheduled Trigger)
* Native integration events (use Integration Trigger)
## Configuration
### Basic Setup
When you add a Webhook Trigger, you automatically get:
* **Unique Webhook URL**: A secure endpoint for receiving requests
* **Webhook ID**: Identifier for your webhook
### Security Options
**Secret Authentication:**
Configure a secret to secure your webhook endpoint:
1. **Add a Secret** (optional)
* Set a secret value in the webhook configuration
* Include this secret in the request header or body when calling the webhook
* Only requests with the correct secret will trigger the workflow
2. **No Secret** (default)
* Webhook is publicly accessible
* Anyone with the URL can trigger it
* Good for testing and low-security use cases
**Best Practice:** Always use a secret for production webhooks to prevent unauthorized access.
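On the receiving side, the secret check amounts to comparing the supplied value against the configured one, ideally in constant time. A minimal sketch (the header name is illustrative, not a documented Langdock header):

```python
import hmac

WEBHOOK_SECRET = "s3cr3t-value"  # the secret configured on the trigger

def is_authorized(headers: dict) -> bool:
    """Constant-time comparison of the supplied secret against the configured one."""
    supplied = headers.get("X-Webhook-Secret", "")
    return hmac.compare_digest(supplied, WEBHOOK_SECRET)

ok = is_authorized({"X-Webhook-Secret": "s3cr3t-value"})
rejected = is_authorized({"X-Webhook-Secret": "wrong"})
```

`hmac.compare_digest` avoids timing side channels that a plain `==` comparison can leak.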
## How It Works
1. External system sends HTTP POST request to webhook URL
2. Webhook validates authentication (if configured)
3. Request payload is parsed (JSON)
4. Workflow starts with payload data available as `{{trigger}}`
5. Webhook responds immediately with 200 OK
6. Workflow processes asynchronously
**Important**: Webhooks respond immediately (within \~100ms) and process the
workflow asynchronously. Don't rely on the webhook response for workflow
results.
## Making Requests to Your Webhook
### Basic Request
```bash theme={null}
curl -X POST https://app.langdock.com/api/workflows/webhooks/abc123 \
-H "Content-Type: application/json" \
-d '{"key": "value"}'
```
## Example Use Cases
### GitHub Webhook Integration
```text theme={null}
Webhook Trigger (GitHub push events)
→ Agent: Analyze commit messages
→ Condition: Check if documentation updated
→ Yes: Regenerate docs
→ No: Continue
→ Notification: Send deployment status
```
**GitHub Webhook Configuration:**
* URL: Your webhook URL
* Events: Push, Pull Request
* Content type: application/json
### Stripe Payment Webhook
```text theme={null}
Webhook Trigger (Stripe events)
→ Code: Validate Stripe signature
→ Condition: Check event type
→ payment_succeeded: Update user account
→ payment_failed: Send retry notification
→ subscription_canceled: Deactivate access
```
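The "Validate Stripe signature" step above is an HMAC check: the sender signs the raw payload with a shared secret, and you recompute and compare. A simplified sketch (Stripe's real scheme also includes a timestamp and a versioned header format; see Stripe's webhook documentation):

```python
import hashlib
import hmac

def valid_signature(payload: bytes, signature: str, secret: str) -> bool:
    """Recompute the HMAC-SHA256 of the payload and compare in constant time."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Simulated incoming event (secret value is illustrative):
secret = "whsec_demo"
payload = b'{"type": "payment_succeeded"}'
sig = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
ok = valid_signature(payload, sig, secret)
```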
### Custom Application Integration
```text theme={null}
Webhook Trigger
→ Code: Validate and transform data
→ HTTP Request: Enrich data from external API
→ Agent: Analyze and categorize
→ Action: Create record in CRM
```
### Slack Command Integration
```text theme={null}
Webhook Trigger (from Slack slash command)
→ Agent: Process natural language command
→ HTTP Request: Execute action in external system
→ HTTP Response: Send result back to Slack
```
## Accessing Webhook Data
Access the webhook payload using the `trigger` variable:
```handlebars theme={null}
{{trigger.output.user_id}}
{{trigger.output.event_type}}
{{trigger.output.data.amount}}
```
Access in workflow:
```handlebars theme={null}
Event: {{trigger.output.event}}
Order: {{trigger.output.order_id}}
Customer: {{trigger.output.customer_name}}
First Item: {{trigger.output.items[0].product}}
```
## Response Codes
| Code | Meaning | When It Happens |
| ---- | ------------ | --------------------------------------- |
| 200 | Success | Workflow triggered successfully |
| 400 | Bad Request | Invalid JSON or missing required fields |
| 401 | Unauthorized | Authentication failed |
| 403 | Forbidden | Workflow is paused or inactive |
| 500 | Server Error | Internal error processing webhook |
## Next Steps
* Use native integration events
* Make requests to external APIs
* Validate and transform webhook data
* Build your first workflow
# Variable Usage
Source: https://docs.langdock.com/product/workflows/variable-usage
Learn how to access, reference, and work with variables across your workflow nodes to build dynamic and powerful automations.
## Introduction
Variables are the lifeblood of your workflows—they carry data between nodes, making your automations dynamic and context-aware. Every time a node completes execution, its output becomes available as a variable that subsequent nodes can access and use.
**Think of variables as containers** that hold data as it flows through your workflow. Understanding how to access and manipulate them is key to building powerful automations.
## Accessing Variables
Langdock provides two intuitive ways to access variables from previous nodes in your workflow:
### Method 1: Double Curly Braces (`{{}}`)
The most direct way to reference variables is using the double curly brace syntax. Simply type `{{` in any field, and you'll see a dropdown of all available variables from previous nodes.
**Basic syntax:**
```handlebars theme={null}
{{node_name.output.field_name}}
```
**Real-world examples:**
```handlebars theme={null}
{{form1.output.email}}
{{analyze_feedback.output.sentiment}}
{{api_call.output.data.userId}}
{{trigger.output.customer_name}}
```
### Method 2: Output Selector
For fields that support it, you can use the visual output selector instead of typing variable paths manually. This is especially helpful when you're not sure of the exact data structure.
**How to use it:**
1. Click on a field that supports variable selection
2. Look for the variable picker icon or dropdown
3. Browse available outputs from previous nodes
4. Select the exact field you need
The output selector automatically generates the correct variable syntax for you, reducing errors and making configuration faster.
***
## Understanding Variable Structure
Variables follow a consistent structure that makes them predictable and easy to work with:
```handlebars theme={null}
{{node_name.output.property}}
```
Let's break this down:
* **`node_name`**: The unique name you gave the node (e.g., `form1`, `analyze_data`, `http_request`)
* **`output`**: The standard output object every node produces
* **`property`**: The specific data field you want to access
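To make the mapping concrete, here is a toy resolver that walks a dotted path (including array indices) through nested data. It only illustrates how the references map onto the payload; it is not Langdock's actual template engine.

```python theme={null}
import re

def resolve(path, context):
    """Resolve a path like 'form1.output.user.email' or
    'orders.output.items[0].price' against a nested dict."""
    value = context
    # Split into name segments and [index] segments.
    for part in re.findall(r"[^.\[\]]+|\[\d+\]", path):
        if part.startswith("["):
            value = value[int(part[1:-1])]  # array index
        else:
            value = value[part]             # object key
    return value

data = {"form1": {"output": {"user": {"email": "ada@example.com"}}}}
print(resolve("form1.output.user.email", data))  # -> ada@example.com
```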
### Accessing Nested Data
Real-world data often has nested structures. You can access deeply nested properties using dot notation:
```handlebars theme={null}
{{node_name.output.user.profile.email}}
{{api_response.output.data.items[0].title}}
{{trigger.output.metadata.created_at}}
```
### Working with Arrays
When your data includes arrays, you can access specific elements by index:
```handlebars theme={null}
{{http_request.output.results[0].name}}
{{trigger.output.attachments[2].url}}
```
Or reference the entire array:
```handlebars theme={null}
{{trigger.output.tags}}
{{api_call.output.items}}
```
### Complex Objects
For structured data from agents or API responses:
```handlebars theme={null}
{{agent.output.structured.summary}}
{{agent.output.structured.priority}}
{{agent.output.structured.action_items[0]}}
```
***
## What Happens When You Rename Nodes
**Node names are tied to variables.** When you rename a node, all variables referencing that node are automatically updated throughout your workflow—no manual fixes needed.
### Automatic Variable Updates
Let's say you have a form trigger node named `form1` being used in multiple places:
```handlebars theme={null}
{{form1.output.email}}
{{form1.output.subject}}
{{form1.output.message}}
```
If you rename `form1` to `PMApplicantForm`, all references automatically update:
```handlebars theme={null}
{{PMApplicantForm.output.email}}
{{PMApplicantForm.output.subject}}
{{PMApplicantForm.output.message}}
```
**This happens automatically in:**
* Manual mode fields
* AI Prompt mode instructions
* Code node references
* Condition node comparisons
* All other node configurations
### Best Practice: Name Nodes Meaningfully
Since renaming is seamless, invest time in giving nodes clear, descriptive names from the start:
**Good node names:**
* `ExtractCustomerData`
* `AnalyzeSentiment`
* `SendWelcomeEmail`
* `CheckInventoryStatus`
**Avoid generic names:**
* ❌ `agent1`
* ❌ `http_node`
* ❌ `trigger`
* ❌ `action`
***
## Reusing Variables Across Multiple Nodes
One of the most powerful features of variables is that **you can use them multiple times across many different nodes**. Once a node produces output, that data is available to all subsequent nodes in your workflow.
### Basic Variable Reuse
Use the same variable in multiple nodes:
```text theme={null}
Trigger (form1) →
├─ Agent (analyze with {{form1.output.message}})
├─ HTTP Request (log {{form1.output.email}})
└─ Notification (alert about {{form1.output.priority}})
```
All three nodes can reference `form1.output` simultaneously since they all come after the trigger.
### Use Case: Multi-Channel Notifications
Send the same information through different channels:
```text theme={null}
Agent (analyze_ticket) →
├─ Email (send {{analyze_ticket.output.summary}} to support team)
├─ Slack (post {{analyze_ticket.output.summary}} to #support)
└─ Database (log {{analyze_ticket.output.priority}} and {{analyze_ticket.output.category}})
```
***
## Advanced Variable Techniques
### Combining Multiple Variables
Mix data from different nodes in a single field:
```handlebars theme={null}
New order #{{trigger.output.order_id}} from {{customer_data.output.name}} for {{trigger.output.amount}}
```
### Variables in Code Nodes
Access variables as standard objects in code nodes:
**JavaScript:**
```javascript theme={null}
const email = trigger.output.email;
const priority = analyze.output.structured.priority;
const score = calculate_score(email, priority);
return { score: score, email: email };
```
**Python:**
```python theme={null}
email = trigger["output"]["email"]
priority = analyze["output"]["structured"]["priority"]
score = calculate_score(email, priority)
return {"score": score, "email": email}
```
### Variables in AI Prompt Mode
Reference multiple variables in AI instructions:
```text theme={null}
Analyze the customer message {{trigger.output.message}} and consider their history:
- Previous purchases: {{customer_data.output.purchase_count}}
- Last contact: {{customer_data.output.last_contact_date}}
- Sentiment from last interaction: {{previous_analysis.output.sentiment}}
Provide a personalized response addressing their concern.
```
### Filtering and Transformation
Use variables to filter or transform data:
**In a Condition node:**
```handlebars theme={null}
{{trigger.output.amount}} > 1000
{{analyze.output.priority}} == "high"
{{customer.output.status}} != "inactive"
```
**In a Code node for filtering:**
```javascript theme={null}
const orders = trigger.output.orders;
const highValueOrders = orders.filter((order) => order.amount > 1000);
return { filtered_orders: highValueOrders };
```
***
## Troubleshooting Variables
### Variable Not Available
**Problem:** The variable you want doesn't appear in the autocomplete.
**Common causes:**
* The node hasn't been connected yet
* The node is downstream of (comes after) the current node
* The node hasn't been executed in a test run yet
**Solution:** Ensure the node producing the variable comes before the node trying to use it in your workflow graph.
### Undefined or Null Values
**Problem:** Variable exists but returns `undefined` or `null`.
**Common causes:**
* The source node failed or returned empty data
* The field path is incorrect
* Optional data wasn't provided
**Solution:**
```javascript theme={null}
// Provide defaults in Code nodes
const email = trigger.output.email || "unknown@example.com";
const amount = trigger.output.amount || 0;
// Check existence first
if (trigger.output && trigger.output.email) {
// Safe to use
}
```
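The same defensive pattern works in a Python code node. A sketch, with the `trigger` dict simulated so the snippet is self-contained:

```python theme={null}
# Simulated node input; inside a real code node, `trigger` is provided.
trigger = {"output": {"email": None, "amount": 25}}

# Fall back to safe defaults when fields are missing or empty.
output = trigger.get("output") or {}
email = output.get("email") or "unknown@example.com"
amount = output.get("amount") or 0

print(email, amount)  # -> unknown@example.com 25
```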
### Wrong Data Type
**Problem:** Variable contains unexpected data type.
**Solution:** Check the output tab of the source node after a test run to see the actual data structure.
```javascript theme={null}
// Debug by logging the variable
console.log(typeof trigger.output.amount);
console.log(JSON.stringify(trigger.output, null, 2));
```
## Quick Reference
### Variable Syntax Cheat Sheet
| Use Case | Syntax | Example |
| ----------------------- | ----------------------------------------------------------- | --------------------------------------- |
| Basic field access | `{{node.output.field}}` | `{{trigger.output.email}}` |
| Nested object | `{{node.output.object.property}}` | `{{user.output.profile.age}}` |
| Array element | `{{node.output.array[index]}}` | `{{items.output.list[0]}}` |
| Nested in array | `{{node.output.array[0].property}}` | `{{orders.output.items[0].price}}` |
| Entire array | `{{node.output.array}}` | `{{trigger.output.tags}}` |
| Agent structured output | `{{agent.output.structured.field}}` | `{{analyze.output.structured.summary}}` |
| Multiple in one string | `Order {{trigger.output.id}} for {{trigger.output.amount}}` | — |
***
## Best Practices
Name nodes clearly so variables are self-documenting: `{{AnalyzeCustomerFeedback.output.sentiment}}` is much clearer than `{{agent1.output.sentiment}}`
After adding a node, run a test and click on the node to inspect its output. This confirms the data structure before using it in downstream nodes.
Use default values for optional fields:
```javascript theme={null}
const priority = analyze.output.priority || "medium";
const tags = trigger.output.tags || [];
```
If you find yourself writing deeply nested paths like `{{node.output.data.items[0].meta.tags[2].value}}`, consider using a Code node to simplify the data structure first.
Add comments in Code nodes or descriptions in nodes when using complex variable logic, especially for team workflows.
***
## Next Steps
Now that you understand variables, explore how to use them effectively in different contexts:
* Learn how to use variables in Auto, Manual, and AI Prompt modes
* Transform and manipulate variables with custom code
* Use variables to create dynamic routing logic
* Understand how variables fit into the bigger picture
# Knowledge Folders
Source: https://docs.langdock.com/resources/integrations/knowledge-folders
Knowledge folders can contain up to 1,000 files and can be attached to an agent. Below is a guide on how to use this feature and the key differences compared to attaching files directly to the agent.
## Knowledge Folders
Knowledge folders let you work with up to 1,000 files in a single collection, extending beyond the 20-file limit for direct attachments. This guide covers setup, usage, and key differences from direct file attachments.
Knowledge folders use vector search to find relevant content sections, which enables large document collections but may not consider every document part for each response.
## Understanding the Context Window Limitation
**Direct attachments**: Up to 20 files sent directly to the model's context window
**Knowledge folders**: Up to 1,000 files with vector search selecting relevant sections
The 20-file limit exists because of model context windows (the maximum information models can process simultaneously).
For detailed comparisons between knowledge folders and direct attachments, see our [comprehensive guide](/resources/faq/knowledge-folders-and-direct-attachments).
When you attach a knowledge folder, Langdock's vector search identifies the most relevant document sections for your specific prompt, then sends only those sections to the model. This approach enables large document collections while working within context window constraints.
## Supported File Types
Knowledge folders support most document formats including PDF, DOCX, TXT, and Markdown files.
**Tabular Data Limitation**: XLSX and CSV files cannot be added to knowledge folders. For spreadsheet data analysis, use direct file attachments (up to 20 files) or convert your data to a supported text format.
## File Size Limits
Langdock's character limits for file uploads are determined by the underlying LLM providers' token restrictions. Since 1 token is approximately 4 characters, and the file upload limit for most major providers is 2 million tokens (roughly 8 million characters), Langdock also supports up to 8 million characters for file content such as PDFs.
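Since 1 token ≈ 4 characters is only a heuristic, a conservative client-side pre-check before uploading might look like this (a sketch, not an official limit check):

```python theme={null}
TOKEN_LIMIT = 2_000_000   # file-upload limit for most major providers
CHARS_PER_TOKEN = 4       # rough heuristic, not an exact conversion

def fits_upload_limit(text: str) -> bool:
    """Heuristic check against the ~8 million character budget."""
    return len(text) <= TOKEN_LIMIT * CHARS_PER_TOKEN

print(fits_upload_limit("hello"))  # -> True
```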
## Setting Up Knowledge Folders
1. Navigate to [Integrations](https://app.langdock.com/integrations) in your Langdock workspace.
2. Choose your upload method:
   * **Manual upload**: Select files directly from your computer
   * **API integration**: Push files programmatically using our REST API
3. Attach the folder to your agent:
   1. Open your target agent
   2. Go to the **Knowledge** section
   3. Click **Add Action**
   4. Search for your knowledge folder name
   5. Select **Add** for the `"Search in [folder name]"` action
## Sharing and Permissions
Knowledge folders support the same sharing capabilities as agents:
* Share with specific users in your workspace
* Share with defined user groups
* Make available to all workspace members
Workspace admins can configure sharing permissions for different roles at [User Management > Roles](https://app.langdock.com/settings/workspace/user-management/roles).
## API Integration
**Rate limit**: 50 requests per minute
**Authentication**: API key required (generated by workspace admins)
**Documentation**: Complete schema available in our [API guides](/api-endpoints/knowledge-folder/sharing)
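To stay under the 50 requests/minute limit when pushing many files, a simple client-side throttle helps. A minimal sketch; the limit value comes from above, everything else is illustrative:

```python theme={null}
import time

class Throttle:
    """Spaces out calls so at most `max_per_minute` happen per minute."""

    def __init__(self, max_per_minute: int = 50):
        self.interval = 60.0 / max_per_minute
        self._last = float("-inf")  # first call never waits

    def wait(self) -> None:
        delay = self.interval - (time.monotonic() - self._last)
        if delay > 0:
            time.sleep(delay)
        self._last = time.monotonic()

throttle = Throttle()  # 50 requests/minute -> 1.2s between calls
# Call throttle.wait() before each API request.
```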
### API Key Management
**Workspace admins** can generate API keys directly in workspace settings. **Non-admin users** need to request keys from their workspace administrator.
# Model Guide
Source: https://docs.langdock.com/resources/models
One of our core values is to build a tool that is model-agnostic. This means we do not want to restrict users to models from just one provider, but rather allow them to choose which model from which provider to use. Each model has different strengths, and we encourage you to test the different models to find the best one for your specific needs.
# Selecting a model
* Whenever you start a new chat, you can select the model you want to work with at the top left.
* You can still change the model at the top left if you have already started a chat. For example, you can start with GPT-4.1 and, after three messages, switch to Claude Sonnet 4.
* You can also set your personal default model in the account settings [here](https://app.langdock.com/settings/account/preferences). The default for new users is GPT-4.1.
# Selecting the right model
Below you will find an overview of which models perform exceptionally well on the use cases you might encounter, along with our [personal recommendations](#our-recommendations) based on current user feedback.
| Model | Strengths | Knowledge Cut-off |
| ------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------- |
| **GPT-5.1** OpenAI | Fastest GPT‑5.1 variant for conversation and everyday tasks. Warmer communication style, high language understanding at low latency. Ideal for chat, writing, and support. | Sep 30, 2024 |
| **GPT-5.1 Thinking** OpenAI | Deepest reasoning model from OpenAI, designed for complex analyses, multi-step planning, and coding tasks. Dynamically adjusts computation time based on complexity; excels through precise reasoning, structured explanations, and maximum transparency on challenging problems. | Sep 30, 2024 |
| **GPT-5.1 Thinking Fast** OpenAI | Optimised reasoning model with better comprehensibility and clarity compared to GPT-5 Thinking. Spends less time on simple tasks and more on complex ones; uses adaptive reasoning for efficient, precise arguments and structured explanations – ideal for fast, intelligent analyses. | Sep 30, 2024 |
| **GPT-5** OpenAI | OpenAI's latest flagship model with cutting-edge reasoning, advanced multimodal capabilities, and highest accuracy on complex tasks. Offers extended context capabilities, image understanding, and flexibility in response schemas, suited for enterprise applications. | Sep 30, 2024 |
| **GPT-5 Thinking Fast** OpenAI | Reasoning variant of the GPT-5 series with higher accuracy and efficient token usage compared to o3. Better performance in visual reasoning and scientific problem-solving, reduces hallucinations with fewer errors on factuality benchmarks. Delivers more precise long-form answers and more reliable facts, optimised for research, technology, and highly complex analyses. | Sep 30, 2024 |
| **GPT-5 mini** OpenAI | Faster, cost-efficient version of GPT-5 with near-flagship reasoning capabilities. Balances performance and speed, offering most GPT-5 features at a lower price – perfect for everyday professional tasks, routine content creation, and analysis. | May 31, 2024 |
| **GPT-5 nano** OpenAI | Lightweight variant that prioritises speed and throughput while maintaining solid reasoning capabilities for common use cases. Delivers reliable answers at the lowest cost in the GPT-5 family and is suited for high-volume or latency-critical applications. | May 31, 2024 |
| **o3** OpenAI | Advanced technical writing & instruction following with outstanding mathematics, science, and programming capabilities. Excels at multi-step reasoning over text and code. A balanced and powerful model that sets new standards for mathematics, science, programming, and visual reasoning. | Jun 01, 2024 |
| **Claude Opus 4.5** Anthropic | Anthropic's most intelligent and robustly aligned model, ideal for complex tasks, professional development, and advanced agents. Lower token consumption for reasoning, coding, and agent capabilities compared to Sonnet 4.5. Supports "effort control" for fine-tuned computation time and advanced context and memory management. | Mar 01, 2025 |
| **Claude Opus 4.5 Reasoning** Anthropic | Reasoning version of Claude Opus 4.5, designed for problems requiring deep, step-by-step analysis. Specifically optimised for logical reasoning, complex planning, and multi-step inference. Utilises Opus 4.5 with expanded "effort level" for maximum analytical quality while reducing token consumption. | Mar 01, 2025 |
| **Claude Sonnet 4.5** Anthropic | Highest intelligence across most tasks with exceptional agent and programming capabilities and authentic, realistic tone for creative writing. Anthropic's most advanced model that, according to multiple benchmarks, is comparable to GPT-5 models and even surpasses them in some areas. | Nov 01, 2024 |
| **Claude Sonnet 4** Anthropic | Excels at complex programming, creative writing, image analysis, and translation with a dual-mode system for fast responses and deep reasoning. Anthropic's top model that improves upon Claude 3.7 with larger context window, seamless mode switching, better programming, and deeper reasoning. Maintains strong security and alignment. | Nov 01, 2024 |
| **Gemini 2.5 Flash** Google | Excels at fast real-time content creation and robust image analysis, processes long documents and datasets with ease. Google's fastest Gemini Flash model with large context window and double output length. Delivers results nearly twice as fast as its predecessor on tasks with long documents and multimodal analysis. | Jan 01, 2025 |
| **Gemini 2.5 Pro** Google | Excels at fast real-time content and image analysis with ultra-long context for processing large documents. Google's flagship model with support for large contexts that performs well on complex tasks and code. Prioritises nuanced reasoning and depth over speed and surpasses earlier Gemini models in accuracy and analysis. | Jan 01, 2025 |
| **Mistral Large 2411** Mistral | Mistral's top model with outstanding reasoning capabilities and strengths in software development, programming, and multilingual abilities. Strong performance on demanding conversations and complex problem-solving. More refined than earlier versions with better instruction following. | Oct 01, 2023 |
# Our Recommendations
### Our Default for Everyday Tasks: GPT-5 (OpenAI)
GPT-5 from OpenAI is our top recommendation for a versatile standard model, excelling in a wide range of tasks with its exceptional multimodal capabilities. This flagship model is ideal for users who need a powerful all-rounder, offering excellent performance in content generation, creative writing, image analysis, and multilingual support. GPT-5 stands out for its state-of-the-art reasoning capabilities while maintaining strong creative writing skills.
### A Hybrid Model for Coding and Writing: Claude Sonnet 4.5 (Anthropic)
Claude Sonnet 4.5 from Anthropic is our top recommendation for coding or text generation. Many of our users prefer Claude for writing their software code, emails, texts, translations etc. As with the previous versions of Claude Sonnet, we recommend it for coding and creating text, since it has an authentic and realistic tone of voice. We added two models to give you the choice on whether you want to use reasoning. The models are Claude Sonnet 4.5 and Claude Sonnet 4.5 Reasoning. Both offer amazing instruction-following abilities, minimal hallucination, and exceptional coding capabilities. We recommend it for its consistently high user satisfaction in language-related and software development tasks.
### The Specialized Model for Complex Reasoning: GPT-5 Thinking (OpenAI)
GPT-5 Thinking from OpenAI is our top recommendation for complex analytical tasks requiring maximum precision. This specialized variant excels at mathematical, scientific, and programming challenges through its advanced reasoning architecture, which breaks down intricate problems with exceptional clarity.
The key technical advantage is its extended thinking capabilities combined with GPT-5's state-of-the-art intelligence, enabling comprehensive analyses without losing coherence. While this deeper reasoning adds modest latency, the accuracy gains make it ideal for professional and enterprise applications requiring the highest quality outputs.
# Image Models
Below you will find the currently available image models within Langdock.
| Model | Strengths |
| ------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Flux1.1 Pro Ultra** Black Forest Labs | Black Forest Labs' flagship model, offering state-of-the-art image generation at blazing speed with top-of-the-line prompt following and visual quality. |
| **Flux.1 Kontext** Black Forest Labs | State-of-the-art in-context image generation and editing by Black Forest Labs. Use it to combine text and images for precise, coherent results. |
| **Imagen 4** Google | Google's leading text-to-image model, engineered for creativity. Imagen 4 can render diverse art styles with greater accuracy, from photorealism and impressionism to abstract styles and illustration. |
| **Imagen 4 Fast** Google | Google's image model which is optimised for rapid image generation and high-volume tasks. |
| **Gemini 2.5 Flash Image (Nano Banana)** Google | Google’s advanced AI image editing model, part of Gemini 2.5 Flash, that allows users to transform and edit photos using natural language prompts with fast, precise, and consistent results. Currently not available with exclusive EU-hosting. |
| **DALL-E 3** OpenAI | OpenAI's legacy text-to-image AI model that generates images from written prompts by combining language and visual understanding. |
| **GPT Image 1** OpenAI | GPT Image 1 is OpenAI's new state-of-the-art image generation model. It is a natively multimodal language model that accepts both text and image inputs, and produces image outputs. |
# Legacy models
The following models are still available and can be used. However, our recommendation is to use the newer versions of them.
> For context: the older model versions have limitations in performance and accuracy compared to their newer counterparts. The newer versions include significant improvements in response quality, speed, and safety features that we've developed based on user feedback and technical advances.
You can continue using the older versions if needed for compatibility reasons, but we'd recommend migrating to the newer versions when possible for the best experience.
| Model | Strengths | Knowledge Cut-off |
| :------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------- | :---------------- |
| **GPT-5 Chat** OpenAI | Conversationally fine-tuned GPT-5 model optimised for dialogue, customer support, and interactive assistants | Sep 30, 2024 |
| **GPT-4.1** OpenAI | Content Generation & Creative Writing, Strong Analytical Skills, high proficiency in logical reasoning and problem-solving | Jun 01, 2024 |
| **GPT-4.1 mini** OpenAI | Smaller, faster version of GPT-4.1. Great for everyday tasks with significantly faster responses. | Jun 01, 2024 |
| **GPT-4.1 nano** OpenAI | Fast and efficient content generation, quick interactive responses, and solid performance on everyday tasks | Jun 01, 2024 |
| **o4 mini** OpenAI | Excels at visual tasks, optimized for fast, effective reasoning. | Oct 01, 2023 |
| **GPT-4o** OpenAI | Content Generation & Creative Writing, Data Analysis (Excel & CSV files), Image Analysis, Translation and Multilingual Support | Oct 01, 2023 |
| **GPT-4o Mini** OpenAI | Speed and Efficiency, Image Analysis | Oct 01, 2023 |
| **o1** OpenAI | Excels at real-time reasoning, planning, content creation, coding, and coherence in extended conversations. | Oct 01, 2023 |
| **o3 Mini High** OpenAI | Enhanced accuracy and depth of response while still benefiting from an optimized, efficient architecture | Oct 01, 2023 |
| **LLaMA 3.3 70B** Meta | Speed and Efficiency, Multilingual Capabilities | Aug 01, 2023 |
| **DeepSeek R1 32B** DeepSeek | Software Development & Coding, Hallucination Resistance | Jan 01, 2025 |
| **Claude 3.7 Sonnet** Anthropic | Software Development & Coding, Content Generation & Creative Writing, Image Analysis, Translation and Multilingual Support | Nov 01, 2024 |
| **Gemini 2.0 Flash** Google | Long-Context Analysis & Document Processing, Image Analysis, Speed and Efficiency | Aug 01, 2024 |
| **Gemini 1.5 Pro** Google | Long-Context Analysis & Document Processing, Image Analysis | May 01, 2024 |
# Slack Bot (Setup)
Source: https://docs.langdock.com/settings/chatbots/slack
Install the Langdock App in Slack and use your models and Agents directly in Slack
You can reach out to [support@langdock.com](mailto:support@langdock.com) to
set up your integration with assistance.
## Prerequisites
* Access to an admin account in your Langdock workspace.
* Permission to install apps in your Slack workspace to follow this guide.
Looking for a guide on how to interact with the Slack Bot once it's already
installed? Check out the [Langdock Slack Bot User
Guide](/resources/chatbots/slack).
***
1. Setup the Slack bot in your [workspace integration settings](https://app.langdock.com/settings/workspace/products/integrations)
2. Allow the Langdock app to read and write messages in your workspace by clicking "Allow".
3. Done! Now all users in your workspace can use the Langdock app in your Slack workspace by tagging **@Langdock**. For a detailed guide on how to use the Langdock app in Slack, please refer to the [Langdock Slack Bot User Guide](/resources/chatbots/slack).
# Teams Bot (Setup)
Source: https://docs.langdock.com/settings/chatbots/teams-bot
Install the Langdock App in Microsoft Teams and use your models and Agents directly in Teams
## Prerequisites
This guide is for **Langdock administrators**. If you're a user looking to set up the Teams Bot, share this page with your workspace admin.
In addition to Langdock admin rights, you'll also need:
1. Admin access to the [Teams Admin Center](https://admin.teams.microsoft.com)
2. Azure admin rights
Only members of a Langdock workspace can chat with the Teams Bot. Make sure all intended users are added to your Langdock workspace before they try to use the bot in Teams.
### Step 1: Locate Langdock Application
In the [Teams Admin Center](https://admin.teams.microsoft.com), navigate to **Teams apps** > **Manage apps** and search for "Langdock".
### Step 2: Configure App Availability Settings
1. Click on the Langdock application to open its settings
2. In the **Users and groups** tab, click **Edit availability** to configure who can access the app (organization-wide, specific groups, or blocked)
### Step 3: Review App Permissions
1. Navigate to the **Permissions** tab
2. Have your Azure admin consent to the required permissions
Teams admins are not necessarily Azure admins. Make sure the person with Azure admin rights completes this step.
### Step 4: Configure App Setup Policies
1. Navigate to **Teams apps** > **Setup policies**
2. Select **Global (Org-wide default)** (or your preferred policy)
3. In the **Installed apps** section, click **Add apps**
**Pro tip**: Enable **User pinning** so users can pin the Langdock app to their Teams sidebar for quick access.
### Step 5: Add Your App to the Setup Policy
1. Search for "langdock" and select the application
2. Click **Add**
3. (Optional) Add Langdock to **Pinned apps** for easy access
4. Click **Save**
### Step 6: Connect Your Langdock Workspace
1. Go to your [workspace integration settings](https://app.langdock.com/settings/workspace/products/integrations) in Langdock
2. Find the Microsoft Teams Bot section and enable the integration
3. Repeat this step for any additional Langdock workspaces you want to make available in Teams
You need admin access to your Langdock workspace to complete this step. Each workspace that should be available in the Teams Bot must be connected separately.
Done! Now all users in your workspace can use the Langdock app in private Chat or in your Teams Channel by tagging **@Langdock**. For a detailed guide on how to use the Langdock app in Teams, please refer to the [Langdock Teams Bot User Guide](/resources/chatbots/teams-bot).
***
*For additional support or questions about the Teams Bot setup, please contact your Langdock administrator or reach out to [support@langdock.com](mailto:support@langdock.com).*
# Fair Usage Policy
Source: https://docs.langdock.com/settings/fair-usage-policy
Langdock has limits of prompts per user per time frame to ensure that all users can use all models at any time.
## Reasoning
Our Fair Usage Policy limits prompts per time frame to allow all users access to all models without a single user negatively impacting the service for others by monopolizing model capacities. It is designed to prevent abuse and ensure reliability and availability for the overall user base.
## Implementation
If a user exceeds the allowed limits, they can switch to a different model. However, most users never hit this limit, and the limit is only temporary.
As LLMs have different prices and different demands, we structured them into different categories:
* Category 1 includes smaller, faster models. They allow unlimited requests, although there is still spam protection against abuse.
* Category 2 models allow 250 messages in three hours.
* Category 3 models allow 100 messages in three hours.
## Overview
**Category 1: Unlimited (with spam protection)**
* Ada v2
* Azure Document Intelligence
* DALL-E 3
* FLUX1.1 \[pro] Ultra
* Gemini 2.5 Flash Image 🍌
* Imagen 4
* Imagen 4 Fast
* Langdock Built-in OCR
* Mistral OCR
**Category 2: 250 messages / 3 hours**
* Claude 3.5 Haiku
* Codestral
* Gemini 2.5 Flash
* GPT oss (120b)
* GPT-4.1
* GPT-4.1 mini
* GPT-4.1 nano
* GPT-4o
* GPT-4o Mini
* GPT-5
* GPT-5 Mini
* GPT-5 Nano
* GPT-5 Thinking
* Llama 3.3 70B
* Llama 4 Maverick
* Mistral Large 2411
* Mistral Medium
* Nova Lite
* Nova Pro
**Category 3: 100 messages / 3 hours**
* Claude Sonnet 3.5
* Claude Sonnet 3.7
* Claude Sonnet 3.7 Reasoning
* Claude Sonnet 4
* Claude Sonnet 4.5
* Claude Sonnet 4.5 Reasoning
* Claude Sonnet 4 Reasoning
* DeepSeek r1
* DeepSeek v3
* DeepSeek v3.1
* Gemini 2.5 Pro
* Gemini 2.5 Pro Reasoning
* GPT-5 Chat
* o1
* o3
* o3 Mini
* o3 Mini high
* o4 Mini
**Category 4: 30 messages / 3 hours**
* Claude Opus 4.5
* Claude Opus 4.5 Reasoning
**Category 5: 20 messages / 3 hours**
* Claude Opus 3
* GPT Image 1
* o3 Pro
**Category 6: 5 messages / 3 hours**
* FLUX.1 Kontext
# Adding your own models
Source: https://docs.langdock.com/settings/models/adding-models
In the model settings, admins can add their own AI models.
To add your own models, we have prepared the following guides for you. If you have any questions, [contact the Langdock team](mailto:support@langdock.com).
## Adding models
### Open model dialogue
1. Go to the [model settings](https://app.langdock.com/settings/workspace/models) and click on **Add Model** to add a new model to the platform
2. A modal opens where you can add models. Here, you find two sections:
* **Display Settings** at the top allows you to customize what the user sees in the model selector.
* **Model Configuration** lets you connect your Langdock workspace to your model API.
### Display Settings
To configure the **Display settings**, follow the steps below. This information is also available from the company hosting the model.
**Provider:** The organization that built and trained the model. This is not necessarily the company you consume the model from. For example, you can use Microsoft Azure to access OpenAI models in the EU, but the provider is still OpenAI.
**Model name:** The name of the model.
**Hosting provider:** Where you consume the model. For example, GPT-4.1 can be hosted by Microsoft Azure.
**Region:** Shows the user where the model is hosted. This can be set to the US or the EU.
**Ranking:** To give users an indication of how the model performs speed- and quality-wise, you can add a ranking from 1 to 5. Smaller models, like GPT-4.1 mini, GPT-5 mini or LLaMA 3.3 70B, are faster but don't have the highest quality. The top models, GPT-5 or Claude Sonnet 4.5, have high output quality.
**Knowledge cutoff:** When the model training data ended. Most models have a knowledge cutoff from mid-2024 to early 2025.
**Image analysis:** Indicates whether the model can analyze images. This information is available from the model provider and the model hoster. Please only enable this setting if the model supports vision/image analysis. Models that allow image analysis are GPT-5, GPT-4.1, Claude, and Gemini models.
### Model Configurations
To set up the Model Configuration, select the SDK you are using. You will find information on the configuration of the model provider (e.g., Azure or AWS):
**SDK:** The kit or library Langdock needs to use the model you added.
**Base URL:** To send prompts to the corresponding endpoint of your model.
**Model ID:** The name of the model in your configuration (this might not be the "official" model name, like GPT-4o).
**API key:** Authenticates Langdock against your model provider when users send prompts.
**Context Size:** The number of tokens the model can process in its context window. Please use the exact value of the model to ensure the context management in Langdock works correctly.
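The connection fields above map onto a standard OpenAI-compatible chat request. Here is a minimal sketch of how they roughly fit together; all values are placeholders for illustration, not real endpoints or credentials:

```python
# Placeholder values -- substitute the Base URL, Model ID, and API key
# from your own provider configuration (e.g., an Azure OpenAI deployment).
base_url = "https://your-resource.example.com/openai/v1"  # Base URL setting
model_id = "gpt-4o-deployment"                            # Model ID setting
api_key = "YOUR_PROVIDER_API_KEY"                         # API key setting

# Chat requests are shaped roughly like this OpenAI-compatible call:
endpoint = f"{base_url}/chat/completions"
headers = {"Authorization": f"Bearer {api_key}"}
payload = {
    "model": model_id,  # the deployment name, which may differ from the display name
    "messages": [{"role": "user", "content": "Hello"}],
}
```

The Model ID in particular is whatever your deployment is called on the provider side, which is why it can differ from the "official" model name shown to users.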
### Other configuration options
**Maximum messages in 3 hours:** Allows you to influence usage/costs and limit messages per user. This setting is optional.
**Input and output token pricing:** Allows you to set the token pricing of the individual model to monitor usage and costs.
**Reasoning Effort:** Determines how much computation the model spends on reasoning. Higher values improve quality but incur extra latency and tokens. Accepted values: Minimal, Low, Medium, High. (Only for GPT-5 models.)
**Verbosity:** Controls the level of detail in the model's final answer. Accepted values: Low, Medium, High. (Only for GPT-5 models.)
**Visible to everyone:** You can set the model to be visible to everyone in the workspace. If this option is disabled, the model is only visible to admins and cannot be used by other users. This allows you to test the model before launching it to the entire workspace.
**Maintenance mode:** Can be activated to show users in the interface that the model might not work as expected. This is useful if you are changing some configuration or there is a temporary issue with the model from your model provider.
### Final steps
1. After entering all mandatory settings, click **Save**
2. We recommend testing the model before making it visible to everyone. Send a message to the model and see if there is a response generated by the model. If you run into any issues, contact [support@langdock.com](mailto:support@langdock.com)
## Special cases during setup
**Mistral from Azure:** Make sure to select "Mistral" as the SDK.
**Claude from AWS Bedrock:** The Base URL needs to contain the "access key" / "Zugriffsschlüssel".
**Flux from Replicate:** The base URL field needs to have the full model path, not just the base URL. For Flux 1.1 Pro this is: `https://api.replicate.com/v1/models/black-forest-labs/flux-1.1-pro/predictions`
# Bring your own Keys (BYOK)
Source: https://docs.langdock.com/settings/models/byok
By default, users use API keys from Langdock. You can optionally use your own API keys instead.
Whenever you submit a prompt and send it to the model, an answer is generated and sent back to you. Costs occur for the underlying AI model for this answer generation.
## Options for LLM costs
There are two options for paying these costs to the model provider (e.g., Microsoft Azure for GPT models):
### Option 1: Flat fee for LLM costs
* You use Langdock's API keys from Microsoft, for example
* All usage is billed through Langdock
* Langdock offers all users in the workspace access to all models at a flat fee
* This flat fee currently costs €5 per user per month
### Option 2: Bring your own keys (BYOK)
* You bring your own API keys from the model provider (for example, Microsoft)
* For Langdock, only the licensing fee for the platform is paid
* All model/usage-related costs are directly between you and the model provider
Option 1 is the "all-inclusive" version of Langdock, where you don't have to set up and manage keys on your side (getting keys for the models, requesting quota, keeping models updated, etc.). Option 2 tends to be a bit cheaper overall.
**Note:** We offer the use of our API keys cheaply as we don't want to incentivize ourselves to make money with LLM API arbitrage and to keep us focused on building a great application layer on top of LLMs.
## Setting up BYOK
To use BYOK and not pay the flat fee for LLM costs, BYOK needs to be manually activated for your workspace by the Langdock team. Otherwise, your workspace still uses the models from Langdock in the background (for embeddings and image generation).
[Here](/settings/models/byok-setup) is a guide on how to set up BYOK.
# BYOK setup
Source: https://docs.langdock.com/settings/models/byok-setup
To use your own models instead of Langdock's flat fee, BYOK needs to be activated. This section guides you through the process of adding your own models.
## 1. Set up the models and the keys in Langdock
You need several models for the platform to work. Add the models and the corresponding keys [here](https://app.langdock.com/settings/workspace/models) in the workspace settings. A key can be used for multiple models from the same provider; for example, GPT-5 and GPT-5.1 can share a key if both come from the same deployment (e.g., in Microsoft Azure).
Here are the models necessary to cover all functionalities:
### 1.1 Embedding Model
* Embedding models process documents and allow the model to search uploaded documents
* We currently require the provision of ADA v2 (text-embedding-ada-002)
### 1.2 Backbone Model
* The backbone model has three purposes:
* It generates chat titles in the sidebar on the left (a three-word summary does not require the main model)
* It defines and executes planning steps of models that are not efficient in tool calling (e.g., LLaMA or DeepSeek)
* If the main model fails, the backbone model jumps in to finish a response for the user.
* We recommend GPT-5 mini (gpt-5-mini) for this purpose.
Important: The backbone model is a separate model you need to set up. If you have already added a GPT-5 mini model, please set up a second one; that deployment will be assigned as the backbone model afterward.
### 1.3 Image Generation Model
* We support dall-e-3, gpt-image-1, Google Imagen, and Flux models from Black Forest Labs.
* For Google Imagen 3, follow the same setup process as [Gemini](/settings/models/gemini) using model ID `imagen-3.0-generate-001`
### 1.4 Completion Models
* For users to select different models in the chat, you can add the completion models for your users like GPT-5, GPT-5.1, o3, Claude Sonnet 4.5, Gemini 2.5 Flash etc.
* Please also add the models needed for Deep Research (o3, o4 mini, and GPT-5 mini). Unlike the backbone model, they do not need two deployments.
* We support all models hosted by Microsoft Azure, AWS, Google Vertex and OpenAI
* For quotas, anything between 200k and 500k should be good to cover usage of \~200 users. For GPT-5.1, the most-used model, you might need a quota of 500k to 1 million tokens.
For the main models, we recommend setting up multiple deployments in different regions. If a model has an error in one region, Langdock automatically retries to call the model in a different region.
**Checklist:**
Now, you should have set up the following custom models:
* 1x Embedding model (Ada)
* 2x GPT-5 mini (one as a completion model and one as a backbone model)
* 1 or more image generation models
* 1x o3
* 1x o4 mini
* Current major models from OpenAI, Anthropic and Google (and others) as Completion models
## 2. Reach out to the Langdock team
After you have set up all the models you need, reach out to the Langdock team. We will align with you on a timeslot to turn on BYOK on our side.
Usually, this should be done in the late afternoon or evening when fewer users are active. There should not be any downtime; this is a precautionary step to ensure no disruptions during the switch.
Please ensure that you or someone who can set up the models is available. We will check that an engineer is also available on our side.
## 3. Test the models
Please make sure that all of the models work correctly. Here is how you can test the models:
* **Completion models:** Send a prompt to each model you can select in the interface (e.g., "write a story about dogs").
* **Embedding model:** Upload a file and ask a question about it (e.g., "what is in the file"). The upload should work and you should receive an answer based on the file.
* **Image model:** Ask any model to generate an image. You should see an image generated by the model in the background.
* **Backbone model:** Write a message in a new chat and check whether a chat title is generated after sending the prompt. (Please ensure that strict mode is disabled for this model)
Please contact the Langdock team if there are any issues here.
# Recommended Models
Source: https://docs.langdock.com/settings/models/recommended-models
The following list contains the models we currently recommend for BYOK workspaces and the setup we use in our cloud (for non-BYOK workspaces).
This overview is only relevant for "Bring-your-own-key" (BYOK) customers of Langdock, who bring their own API Keys.
In [this table](https://docs.google.com/spreadsheets/d/1AFf7sHiSlMLF7mR0H9UPEqXbaRIrZCIjNMlEQCQou-w/edit?usp=sharing) you can find the recommended configuration for different models. Please reach out to us if you have any questions.
The recommended models we use in our cloud for non-BYOK customers are highlighted in grey. We recommend adding at least the top models of the leading providers:
* **GPT-5.2** - Latest flagship model with enhanced reasoning capabilities
* **GPT-5** - Powerful flagship model with advanced multimodal capabilities
* **GPT-5 mini** - Lightweight version optimized for speed and cost efficiency
* **o4 mini** - Specialized reasoning model for complex problem-solving
* **o3** - Advanced reasoning model with enhanced analytical capabilities
* **Claude Sonnet 4** - Balanced model combining intelligence with speed
* **Claude Sonnet 4 Reasoning** - Enhanced version with improved logical reasoning
* **Gemini 2.5 Pro** - Google's flagship model with advanced multimodal capabilities
* **Gemini 2.5 Flash** - Fast, efficient model optimized for real-time applications
# API Key Best Practices
Source: https://docs.langdock.com/administration/api-key-best-practices
Keep your Langdock API keys safe and secure with these best practices for key management.
API keys are sensitive credentials that provide access to your Langdock account and resources. Protecting them is essential to maintain the security of your applications and data. This guide outlines best practices for managing your Langdock API keys safely.
## Why API Key Security Matters
Your Langdock API keys grant access to your account's AI capabilities and data. If compromised, unauthorized users could:
* Access your Langdock resources and incur unexpected costs
* Expose sensitive data processed through your applications
* Abuse your account for malicious purposes
* Violate your organization's compliance requirements
## Best Practices for API Key Management
### Never Hardcode API Keys
Don't do this:
```python theme={null}
from openai import OpenAI
client = OpenAI(
base_url="https://api.langdock.com/openai/eu/v1",
api_key="your-api-key-here"
)
```
Hardcoding API keys in your source code exposes them to anyone with access to your codebase, including version control history.
### Use Environment Variables
Store your API keys in environment variables rather than in your code. This separates configuration from code and makes it easier to manage different keys across environments.
Do this instead:
**1. Create a `.env` file in your project directory and add your API key:**
```bash theme={null}
LANGDOCK_API_KEY=your-api-key-here
```
**2. Install the python-dotenv package:**
```bash theme={null}
pip install python-dotenv
```
**3. Load your API key into your Python script:**
```python theme={null}
from dotenv import load_dotenv
from openai import OpenAI
import os
load_dotenv()
client = OpenAI(
base_url="https://api.langdock.com/openai/eu/v1",
api_key=os.environ.get("LANGDOCK_API_KEY")
)
```
### Keep Keys Out of Version Control
Add files containing sensitive credentials to your `.gitignore` file to prevent accidentally committing them:
```bash theme={null}
# .gitignore
.env
.env.local
config/secrets.yml
credentials.json
```
### Use Different Keys for Different Use Cases
Create separate API keys for different applications, environments, or teams. This practice:
* Limits the impact if a key is compromised
* Helps track usage by application or team
* Makes key rotation easier
* Provides better audit trails
For example, use separate keys for:
* Development vs. production environments
* Different applications using the Langdock API
* Different teams or departments in your organization
### Never Expose API Keys in Browser Requests
**Important:** Langdock does not support browser-based API requests. The Langdock API is designed exclusively for server-to-server communication. Attempting to make direct API calls from a browser will result in CORS (Cross-Origin Resource Sharing) errors.
API keys should never be exposed in client-side code because they would be:
* Visible in browser network traffic
* Accessible through browser developer tools
* Extractable from JavaScript source code
* Exposed to any user of your application
Your backend server should securely store the API key using the best practices described above and make requests to Langdock on behalf of your users.
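As a sketch of that pattern, the backend reads the key from the environment, attaches it only to the server-to-server request, and returns nothing but the completion text to the browser. The helper names below are hypothetical, and the `/chat/completions` path is assumed from the OpenAI-compatible endpoint shown earlier:

```python
import os

def build_upstream_request(user_message: str) -> dict:
    """Server-side only: attaches the API key from the environment."""
    return {
        "url": "https://api.langdock.com/openai/eu/v1/chat/completions",
        "headers": {"Authorization": f"Bearer {os.environ['LANGDOCK_API_KEY']}"},
        "json": {
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": user_message}],
        },
    }

def response_for_client(upstream_json: dict) -> dict:
    """Return only the completion text to the browser -- never the key."""
    return {"reply": upstream_json["choices"][0]["message"]["content"]}
```

The key never appears in anything sent to the browser; only the contents of `response_for_client` reach your users.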
### Implement Key Rotation
Regularly rotate your API keys to minimize the risk of long-term exposure:
1. Generate a new API key in your Langdock dashboard
2. Update your applications to use the new key
3. Monitor to ensure the transition is successful
4. Revoke the old key after confirming the new one works
We recommend rotating keys at least every 90 days, or immediately if you suspect compromise.
### Monitor Usage and Set Limits
Regularly review your API usage in the Langdock dashboard to detect any unusual patterns that might indicate a compromised key. Set up usage alerts and spending limits where possible to protect against unexpected charges from leaked keys.
## What to Do If Your API Key Is Compromised
If you suspect your API key has been exposed:
1. **Immediately revoke the key** in your Langdock dashboard
2. **Generate a new key** with appropriate permissions
3. **Update your applications** to use the new key
4. **Review your account activity** for any unauthorized usage
5. **Contact Langdock support** if you notice suspicious activity
6. **Document the incident** for your security records
## Need Help?
If you have questions about API key security or need assistance with your Langdock account:
* Contact our support team at [support@langdock.com](mailto:support@langdock.com)
* Review our Terms of Service and Privacy Policy for additional information
Remember: API key security is an ongoing practice, not a one-time setup. Regular review and updates to your security measures will help keep your Langdock account and applications safe.
# API Usage Export
Source: https://docs.langdock.com/administration/api-usage-export
Export detailed API key usage data to CSV format for cost analysis, monitoring, and billing reconciliation.
API usage exports are available to workspace administrators and provide detailed cost breakdowns for each API key over your selected time period.
## Accessing API Usage Export
Navigate to your API settings page to view analytics and export detailed usage data for your API keys.
1. Go to **Settings > API** in your workspace settings
2. Review the API key analytics displayed on the page
3. Choose your desired time frame from the timeframe selector (default: last 30 days)
4. Click the **Export** button to download a CSV file with detailed API usage data
## Time Period Selection
The export includes all API calls made within the selected timeframe:
* **Default period**: Last 30 days
* **Custom ranges**: Select any timeframe using the timeframe selector
* Data is filtered based on the date/timestamp of each API call
The same timeframe selection applies to both the analytics visualization and the CSV export.
## Export Data Structure
The CSV export contains one row per API call with detailed cost and performance information.
### Column Definitions
| Column | Description |
| --------------------------- | ------------------------------------------------------------------------- |
| `date` | Date and timestamp of the API call (UTC) |
| `api_key_id` | Unique identifier of the API key used |
| `api_key_name` | Human-readable name of the API key |
| `provider` | AI provider (OpenAI, Anthropic, Google, DeepSeek, Meta, Amazon, Mistral) |
| `model` | Specific model name (e.g., GPT-4, Claude-3.5-Sonnet, Gemini-Pro, Llama-3) |
| `input_tokens` | Number of tokens in the request prompt |
| `output_tokens` | Number of tokens in the model's response |
| `completion_time_ms` | Time taken to complete the request in milliseconds |
| `input_token_price_per_1m` | Cost per 1 million input tokens in USD |
| `output_token_price_per_1m` | Cost per 1 million output tokens in USD |
## Calculating Costs
To calculate the cost of an individual API call, use the following formula:
```
Total Cost = (input_tokens × input_token_price_per_1m / 1,000,000) +
(output_tokens × output_token_price_per_1m / 1,000,000)
```
### Example Calculation
For an API call with:
* 500 input tokens
* 1,200 output tokens
* Input price: \$2.50 per 1M tokens
* Output price: \$10.00 per 1M tokens
```
Input Cost = 500 × $2.50 / 1,000,000 = $0.00125
Output Cost = 1,200 × $10.00 / 1,000,000 = $0.01200
Total Cost = $0.01325
```
To calculate the total cost for an API key, sum the individual costs across all rows for that key in your spreadsheet application.
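If you prefer to script the reconciliation, the same formula can be applied per row of the export. A minimal sketch, assuming the column names from the table above:

```python
import csv
import io

def row_cost(row: dict) -> float:
    """Apply the formula: tokens x price-per-1M / 1,000,000 for input and output."""
    return (int(row["input_tokens"]) * float(row["input_token_price_per_1m"])
            + int(row["output_tokens"]) * float(row["output_token_price_per_1m"])) / 1_000_000

def total_cost_per_key(csv_text: str) -> dict:
    """Sum the per-call costs grouped by api_key_name."""
    totals: dict = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["api_key_name"]] = totals.get(row["api_key_name"], 0.0) + row_cost(row)
    return totals
```

Running this on the worked example above (500 input tokens at $2.50/1M, 1,200 output tokens at $10.00/1M) yields the same $0.01325 per call.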
## Data Privacy
* API key IDs and names are included for administrative visibility
* No end-user personal information is included in API usage exports
* Data reflects actual API calls made through your workspace
# Finding Invoices
Source: https://docs.langdock.com/administration/finding-invoices
Access and download your billing invoices directly from the Langdock Platform. All invoices are automatically generated and available immediately after payment processing.
## Accessing Your Invoices
1. From your workspace, click **Workspace Settings** in the left sidebar, then select **Billing** from the settings menu.
2. Click the **Manage Payment & Invoices** button; you'll find a link to Stripe where you can see your Subscription Dashboard.
3. Once you've clicked the button, you'll see the details of your subscription. Here you can manage your **Payment Method** or update your **Billing Information**. If you choose to leave Langdock, you can also cancel your subscription here.
4. Scroll down to the **Invoice History** section and click any invoice you want to download. Invoices are sorted from newest to oldest.
## Invoice Information
Each invoice includes:
* Invoice number and date
* Billing period covered
* Payment method used
* Total amount charged
* Seat count for the billing period
* Feature usage details
* Any applicable discounts
* Tax information by region
## Need Help?
Can't find a specific invoice? Use the date filter in the Billing History section to narrow down your search by month or year.
If you need invoices sent to a different email address or require additional billing documentation, reach out to our support team at [support@langdock.com](mailto:support@langdock.com) with your workspace details.
# Identifying Use Cases
Source: https://docs.langdock.com/administration/identify-use-cases
This guide helps you understand what great use cases are and how to identify them for your organization.
## What are great use cases of AI?
Great use cases are situations and prompts/agents that **increase quality** of your work or your product and/or **reduce effort and time** to get to a result.
We recommend starting with **horizontal use cases** that are relevant for many people across all teams, ideally to everyone. This approach has several advantages:
* Everyone understands the problem and can relate to the situation. This increases willingness to learn how to build use cases and use AI in daily work.
* Horizontal use cases require less customization. When trying to cover deep vertical use cases, many integrations and custom steps are often needed, which increases effort.
* Deeper use cases are more difficult to build and maintain. In the beginning, collective AI knowledge isn't as deep yet, so it makes sense to focus on educating users with simpler cases first before diving into more complex use cases.
The email agent, document summarizer, or translator may not seem as exciting as a fully automated CRM agent. But these use cases are relevant in almost any organization and already help users significantly in their daily work.
## How to find use cases
### 1. Experiment and understand AI capabilities
AI excels at performing specific tasks across different areas. Initially, let users experiment and learn about AI's different capabilities. Pair this experimentation with showing example use cases and helping users organically develop their own use cases. Here are general AI capabilities:
| Text | Images | Audio (coming soon) | Data Analysis |
| ---------------- | ------------ | ------------------- | --------------------------------- |
| Write | Create | Transcribe | Extract data |
| Summarize | Analyze | Speak | Perform analyses and calculations |
| Analyze | Describe | | Identify patterns |
| Answer questions | Extract text | | Create tables and diagrams |
### 2. List daily activities
After understanding how AI generally works, ask users to list 5 activities that are repetitive and time-consuming.
### 3. Collect activities
Collect activities from the entire group and cluster similar activities. If several people have the same or similar use case, it might make sense to exchange experiences or work together on them.
You can use a whiteboard or digital whiteboard (e.g., Miro, Mural, Figjam) for this activity.
### 4. Connect use cases with AI capabilities
Understand which use cases work with which AI capability from above. For example, a translation use case would require writing text, while a document summarizer needs text writing and text summarization capabilities.
### 5. Prioritize what to work on first
Every organization has hundreds of use cases where AI can help. Trying to start with all use cases at once often overwhelms users, and in the end, no use case is properly covered. The key is to focus on a few and build them step by step.
In our experience, it makes sense to start with use cases that require little effort to build and have high impact for many people in the organization.
You can use a 2x2 matrix to prioritize use cases. The different axes are feasibility and impact.
**Feasibility can be evaluated by:**
* **Effort** - Lower effort makes it more feasible
* **Data and attachments readiness** - If data needs to be cleaned up or collected first, additional time is needed and feasibility is reduced
* **APIs, integrations, or code needed** - Many use cases can be covered by uploading a file from your computer or using Langdock integrations. While Langdock offers APIs, actions, and customization options, this increases effort
**Impact can be evaluated by:**
* **Time saved** - How many hours can be saved per week/month for how many employees?
* **Quality gains** - How much better is the output quality? Are errors reduced?
* **Customer satisfaction** - Does this use case improve service quality or speed?
* **Financial impact** - Is there potential to save costs or increase revenue?
After prioritizing use cases, start with high-impact use cases that require little effort. Afterward, work on high-impact use cases that require more effort. Postpone or skip low-impact tasks (there are probably many more use cases with high impact).
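The prioritization above can be sketched as a simple scoring exercise. The 1-5 scales, the cut-off of 3, and the example use cases below are illustrative assumptions, not a prescribed method:

```python
def quadrant(feasibility: int, impact: int) -> str:
    """Place a use case in the 2x2 matrix (scores on a 1-5 scale)."""
    high_feasibility = feasibility >= 3
    high_impact = impact >= 3
    if high_impact and high_feasibility:
        return "start here"       # high impact, little effort
    if high_impact:
        return "do next"          # high impact, more effort
    return "postpone or skip"     # low impact

# Hypothetical workshop results: (feasibility, impact)
use_cases = {
    "Email drafting agent": (5, 4),
    "Fully automated CRM agent": (2, 4),
    "Meeting-notes formatter": (4, 2),
}
prioritized = {name: quadrant(f, i) for name, (f, i) in use_cases.items()}
```

The point of the exercise is the ordering, not the exact scores: everything in "start here" comes first, "do next" follows, and the rest waits.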
At this point, each user has one use case to work on that helps them in their daily work.
### 6. Document and execute
After finding and prioritizing use cases to work on, document your findings. Create a table listing all use cases, what AI capabilities they utilize, how much effort and impact they have, who owns them, and what the next steps are.
### 7. Build use cases in groups and individually
Now it's time to build the use cases. You can build a few use cases in a group together so people get a feeling for how it works. Afterward, everyone has time to experiment and build their use cases individually or in smaller groups. A good timeframe for enough experimentation without losing momentum is 1-2 weeks.
In the meantime, you can follow up with users individually to see if they're stuck or need help.
In the next group session, you can share how different use cases were built, what users learned, what worked, and what didn't work. Keep in mind that not everything works immediately, and some use cases aren't ideal for AI to perform. This is normal and part of the learning journey.
The Langdock team is also available to support you here. Just reach out to your point of contact to discuss how we can help.
# Invoices
Source: https://docs.langdock.com/administration/invoices
Our invoices are created automatically in Stripe, our billing portal. At the beginning of each month, you receive an invoice with the number of seats for the upcoming month. This calculation is corrected in the following invoice if you add or remove members from your Langdock workspace.
## How do our invoices work?
### Example invoice
| **Description** | **Qty** | **Unit price** | **Amount** |
| ------------------------------------------------- | ------- | ------------------------------ | ----------- |
| 16 Aug 2024 - 1 Sep 2024 | | | |
| Remaining time for 7 x Langdock Team after 16 Aug | 7 | | €67.74 |
| Unused time for Langdock team after 16 Aug | 6 | | -€58.06 |
| Remaining time for 9 x Langdock Team after 24 Aug | 9       |                                | €40.65      |
| Unused time for Langdock team after 24 Aug | 7 | | -€31.61 |
| 1 Sep 2024 - 1 Oct 2024 | 9 | €20.00 | €180.00 |
| | | | |
| | | Subtotal | €198.72 |
| | | Total excl. VAT | €198.72 |
| | | VAT - Germany (19% on €198.72) | €37.76 |
| | | **Total** | **€236.48** |
### Our invoices consist of two parts:
**Section 1** (16 Aug - 1 Sep) corrects the price for the previous month if users were added or removed from the workspace.
**Section 2** (1 Sep - 1 Oct) calculates the price for the upcoming month based on the current number of users in the workspace.
### Section 2: Upcoming month
The upcoming month section is calculated as:
**Number of users × price per seat**
For example: 9 users × €20 = €180
### Section 1: Previous month corrections
The correction section accounts for changes made during the previous billing period.
In this example, the previous invoice billed €120 for 6 workspace users. This invoice corrects that amount because 3 users were added:
**First addition (16 Aug):**
* One user was added on 16 Aug, bringing the total to 7 users
* For the remaining 15 days, 7 licenses cost €67.74
* The unused time deducts the amount already paid for 6 licenses (€58.06)
* Net adjustment: €67.74 - €58.06 = €**9.68**
**Second addition (24 Aug):**
* Two more users were added on 24 Aug, bringing the total to 9 users
* For the remaining 7 days, 9 licenses cost €40.65
* The unused time deducts the amount already paid for 7 licenses (€31.61)
* Net adjustment: €40.65 - €31.61 = €**9.04**
### Total calculation
**Total amount = Previous month corrections + Upcoming month**
€198.72 = \[€9.68 + €9.04] + €180.00
This ensures you pay exactly for the licenses used during each period, with automatic adjustments for any changes to your team size.
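The corrections follow a simple proration rule: seats × price × remaining days ÷ days in the month. A sketch reproducing the example invoice, assuming €20 per seat, a 31-day month, and day counts inferred from the example amounts:

```python
def prorated(seats: int, price: float, days_remaining: int, days_in_month: int = 31) -> float:
    """Cost of `seats` licenses for the remaining part of the month."""
    return round(seats * price * days_remaining / days_in_month, 2)

PRICE = 20.00  # per seat per month, as in the example invoice

# First addition (16 Aug): 6 -> 7 users, 15 billable days remaining
first = prorated(7, PRICE, 15) - prorated(6, PRICE, 15)   # €9.68
# Second addition (24 Aug): 7 -> 9 users, 7 billable days remaining
second = prorated(9, PRICE, 7) - prorated(7, PRICE, 7)    # €9.04
# Upcoming month (1 Sep - 1 Oct): 9 users at full price
upcoming = 9 * PRICE                                      # €180.00

total_excl_vat = round(first + second + upcoming, 2)      # €198.72
```

The actual amounts come from Stripe, which may prorate at finer granularity than whole days, so treat this as an approximation of the mechanics rather than the exact billing algorithm.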
# Useful Links
Source: https://docs.langdock.com/administration/legal-compliance
Below you'll find all key links for legal and compliance topics, especially those related to security and privacy.
| Description | Link |
| --------------------------------------------------------------------- | ------------------------------------------------------------ |
| Auftragsverarbeitungsvertrag (AVV) / Data Processing Agreement (DPA) | [AVV/DPA](https://trust.langdock.com/) |
| Datenschutzerklärung / Privacy Policy | [Privacy Policy](https://www.langdock.com/de/privacy-policy) |
| Nutzungsbedingungen / Terms of use | [Terms of use](https://www.langdock.com/de/terms) |
| Status page | [Status Page](https://status.langdock.com/) |
| Trust Center (including Certificates for SOC 2 Type II and ISO 27001) | [Trust Center](https://trust.langdock.com/) |
The Data Processing Agreement (DPA) is part of our Terms of Service and automatically applies once you start using Langdock.
If you have further questions or need documentation not listed here, feel free to reach out to our team at [support@langdock.com](mailto:support@langdock.com).
# Permission Recommendations
Source: https://docs.langdock.com/administration/permissions
This overview shows our recommended permissions.
Langdock customers usually have three different types of users.
* Admins: Manage workspace customization (branding, custom links in the sidebar, ...) and users
* Editors: Users with special permissions who support admins in rolling out Langdock. They are responsible for educating specific teams and helping them build use cases.
* Users: Everyone who uses Langdock to chat, create agents, prompts, etc.
Based on our experience, we recommend allowing all users to create agents, upload documents, and attach integration folders, so that everyone can build scalable use cases. Editors act as "super users" who keep the workspace clean and help moderate groups, shared agents, knowledge folders, and prompts.
# Pricing
Source: https://docs.langdock.com/administration/pricing
This section explains the different pricing models for Langdock features.
You can find our current pricing [here](https://www.langdock.com/pricing). Our pricing is built on two models:
### Seat-based pricing
The foundation of our pricing is a seat-based model for users in your Langdock workspace. Each user in your workspace counts as one seat, regardless of their usage level.
### Usage-based pricing
For features where usage and costs don't necessarily correlate with the number of users, we use usage-based pricing. This applies to features like API requests in our API product.
### Volume discounts
Our pricing is reduced in steps for accounts above 50 users, with discounts scaling based on your total number of users.
### Further features
Specific options are only available in the enterprise tier for organizations with more than 1000 users:
* Custom onboarding
* Dedicated support
* Dedicated deployment (on-premise and cloud)
If you have questions about pricing, want to use our rollout support, or need a custom quote, reach out to [support@langdock.com](mailto:support@langdock.com).
# Rollout Playbook
Source: https://docs.langdock.com/administration/rollout-playbook
We have built this playbook based on successful AI rollouts with our customers. We're happy to tailor it to your individual needs and discuss your specific rollout plan.
## Rollout Process
### 0. Planning the rollout
After securing leadership buy-in, the AI owner and team plan the rollout. Before starting, create a rough plan of measures and initiatives to educate users.
**We recommend preparing these items:**
* Add your logo, security hints, and custom links to the platform
* Set up SSO
* Fill the prompt library and agent library with suitable use cases
* Set up a shared channel with your users and the Langdock team
* Plan upcoming meetings and invite your users
* Choose pilot participants
### 1. Exploration and first use cases
Now you can onboard users to the platform. This moment creates momentum and excitement that you should use to drive organic adoption and discover use cases.
**Week 1: Kickoff (45-60 min)**
* Initial meeting to get to know the Langdock team and platform
* Q\&A and understand next steps
* First 1-2 company-specific use cases
* Users are added to a shared Slack/Teams channel
**Week 3: Deep Dive - Prompt Engineering (45-60 min)**
* Input from the Langdock team about prompt engineering
* 1-2 company-specific use cases from users
* Q\&A and sharing learnings
**Week 5: Deep Dive - How to build an agent (45-60 min)**
* Input from the Langdock team about agents
* 1-2 company-specific use cases from users
* Q\&A and sharing learnings
**Week 7: How to find use cases (45-60 min)**
* Input from the Langdock team about finding use cases
* 1-2 company-specific use cases from users
* Q\&A and sharing learnings
* Goal: Get people on board who haven't found their use case yet
The Langdock team is available for 1:1 sessions for individual questions, ideas, and building use cases together.
Advanced rollout support (including 1:1 sessions and workshops) is available starting at 150 licenses.
The biggest lever for internal usage is building and sharing use cases with your user base. Share them in the shared channel, during check-ins, meetings, and company events to increase momentum and let users learn from each other.
At this point, you'll have champions - ideally 1-2 in each department. They're excited about AI, know how to prompt, and can build use cases. They'll educate and excite others. Keep momentum by maintaining regular exchange and diving into attractive use cases.
### 2. Building out more use cases
After the first phase, onboard more users and build more use cases. The more AI champions you have, the easier this phase becomes.
You will have some examples already, but should also encourage users to find their use cases organically. Below is a framework to do this. You can find more details [here](/administration/identify-use-cases).
**Group Session - Defining Use Cases (45-60 min)**
* Collect 5 tasks that are repetitive and time-consuming
* Prioritize based on impact and feasibility
* Get started with 1-2 individual use cases
**Group Session - Check-In after 1-2 weeks (45-60 min)**
* Share learnings
* Check on use cases: What worked, what didn't work, how did users build them?
In these sessions, everyone identifies use cases that would help in their daily work. Afterwards, they experiment and try to build them. Check in with users during this phase, and after 1-2 weeks, meet again to share learnings.
### 3. Grow sustainably
Over time, users will organically build agents, share prompts, and use Langdock more. Continue workshops, help users individually, and maintain AI momentum.
You can slowly integrate other tools and work on higher-effort tasks:
* Search integrations to read information from other tools
* Agent actions to read, update, and create data in other tools
* Using the API to access Langdock from other tools
* Agents for building highly individualized workflows
## General measures
You don't need to implement all these initiatives, but some might help maintain momentum:
* **Hackathons or Promptathons** - Teams of 4-5 people work on problems using AI solutions
* **30-day AI challenge** - One small AI task daily to learn and build habits
* **AI newsletter** - Share current developments, internal success stories, and tips
* **Reminder notes** - Sticky notes saying "Use Langdock" to prompt daily AI thinking
* **Knowledge sharing** - Share small tricks 1-2 times weekly (max 2 sentences)
## Signs of a successful rollout
* People are sharing use cases and learnings
* Many people book 1:1 sessions
* Users have questions in check-ins
* Users create pull - they want more content and proactively build use cases
* Successes are celebrated to maintain momentum
* Leadership communicates clear goals and motivation
* **Good KPIs:** Growing active users and increasing prompts sent
## What to avoid
* **Don't build many use cases at once** - Focus on 1-2 first and adopt them properly
* **Don't overengineer the rollout** - Start simple, see what works, then adapt
If you have questions, want rollout support, or need a quote, reach out to [support@langdock.com](mailto:support@langdock.com).
# Rollout Setup
Source: https://docs.langdock.com/administration/rollout-setup
Rolling out AI works differently than rolling out other software tools. Here is the ideal setup we have seen in successful rollouts.
## What is different when rolling out AI compared to other software?
Over the past decades, we have used deterministic software like CRMs, ERP systems, wikis, and word editors. The entire workforce today can use computers and these software tools. With deterministic software, if you click button X, action Y always happens.
Compared to this traditional behavior, if you send the same prompt to an AI model, the response will never be 100% the same. This is because AI models are **stochastic software**.
**The benefit:** AI is highly customizable and can be utilized in every area of your organization.
**The challenge:** It requires education and training of users.
Rolling out AI internally brings the opportunity to improve many processes, and users are excited to try this new software and advance their skills.
## The ideal setup
### Leadership buy-in
Companies with the highest AI adoption and productivity gains have AI deeply embedded in their strategy, with leadership pushing this topic. It's not only owned by IT or a smaller department but by C-level members. They regularly make AI a key priority, create visibility internally and externally, and convince departments to experiment and find use cases.
This internal support makes it easier for everyone involved because it allows experimentation and learning. For leadership, rolling out AI helps future-proof the company and streamline operations.
### Internal AI owner / team
We recommend having at least one AI owner in the company: hire or assign one person whose main job is to adopt AI and improve processes with AI. This is often a Chief AI Officer, innovation manager, or member of the digital team.
This is a great opportunity for both the company and the person taking over this responsibility:
* **For the company:** Investing personnel costs into adopting AI is often more efficient than buying the most expensive tool
* **For the AI owner:** The role shapes processes across the entire company, collaborates with all departments, and often has board exposure since AI is a top priority for leadership
### AI champions
One important job of the AI owner is to find champions in each department. They should learn as much as possible about Langdock and AI and transfer this knowledge to people across different departments.
Each department has power users who are very excited about AI, and usually many people are interested in becoming early adopters. These champions give feedback, help build use cases, and act as a bridge between their colleagues and the AI owner.
# Subscription
Source: https://docs.langdock.com/administration/subscription
Manage your Langdock subscription and billing cycle
In the **Billing** section of the workspace Settings, you can manage your current plan, view billing details, and adjust your subscription settings.
## Pricing Overview
Langdock's pricing is designed to scale with your team's needs. You can find the detailed pricing overview [here](https://www.langdock.com/pricing).
### General Pricing (Chat & Agents)
The core Langdock experience, including Chat and Agents, is billed on a **per-user basis**.
* **Yearly Discount**: Choosing the yearly billing cycle offers a discount of 20% compared to monthly billing.
* **Seat Count Price Reduction**: As your team grows, Langdock offers volume discounts. The price per seat automatically reduces as you reach certain user count thresholds (e.g., above 50 users). This ensures that scaling your deployment remains cost-effective.
### Workflow Subscription
Workflows are billed per **Workspace**.
* **Starter**: Includes a basic allowance of workflow runs (e.g., 2,500 runs/month) and is included with the Chat & Agents plan.
* **Business/Enterprise**: For higher volume needs, you can subscribe to specific Workflow packages (e.g., 40,000 runs/month) which are billed additionally to your user seats.
## Switching from Monthly to Yearly Billing
If you are currently on a monthly plan and wish to switch to a yearly plan to take advantage of the discounted rates, you can do so by resetting your subscription.
To switch to annual billing:
1. Navigate to **Workspace Settings > Billing & Account**.
2. **Cancel** your current monthly subscription.
3. Wait until your current billing period ends.
4. Renew your subscription and select the **Yearly** billing option.
Your data and settings will be preserved during this process.
The invoice will be calculated based on your exact usage. If you cancel mid-cycle, the unused portion of your subscription will be refunded.
# Tips and tricks for your internal channels
Source: https://docs.langdock.com/administration/tips-and-tricks-internal
In this article you'll find short messages with tips and tricks that you can publish in your internal channels to share bite-sized knowledge.
## How to use this resource
When you want to share specific Langdock knowledge with your team, come to this page and copy the messages below directly into your internal messaging tool. Each message is designed to be immediately actionable and includes the technical context your colleagues need to try the feature right away.
We keep each message under 2 minutes to read and implement, so your team can quickly discover useful features without disrupting their workflow.
All messages include direct links to documentation or specific Langdock pages where users can learn more or take immediate action.
Each message contains:
* **Clear feature explanation** with specific technical details
* **Exact steps** to try the feature immediately
* **Direct links** to relevant documentation or Langdock pages
Messages are organized by experience level:
* **Novice**: First few weeks with Langdock, covering essential basics
* **Regular**: Using Langdock for several weeks, ready for productivity features
* **Poweruser**: Advanced techniques for maximum efficiency
Filter messages by current team priorities:
* Prompting techniques
* Productivity shortcuts
* Langdock features
## Before you start
These messages don't include standard introductions like "Did you know\..." or "Hi there, did you know\..." to give you flexibility in how you share them with your team.
From our experience, adding context for your first message works well. You can use this snippet before sharing your first tip:
"We're starting to share practical AI knowledge over the next few days. Every few days, we'll post a quick tip in this channel covering topics like finding AI use cases, building custom agents, and other productivity techniques. Each one takes under 2 minutes to read and try."
### Intro hooks
If you still want to use some hooks, we prepared a few options:
**Quick Productivity Hook**
"Quick Langdock tip to save you time:"
**Feature Discovery Hook**\
"Here's a Langdock feature that might be useful:"
**Workflow Enhancement Hook**
"To make your AI workflow more efficient:"
**Hidden Gem Hook**
"Langdock feature you might have missed:"
**Problem-Solution Hook**
"If you want to \[work faster/stay organized/get better results] in Langdock:"
***
## Level: Novice
Perfect for teams just beginning their Langdock journey. These messages cover essential platform basics that create immediate value.
### Chat Branching
You can branch your chat in Langdock to explore different ideas without cluttering your conversation. Start by asking a question like "Explain 1" and dive into that topic. When you want to explore something else, go back to your original question and change it to "Explain 3."
To edit a prompt you've already sent, just hover over the message, click the pencil icon, and select "Edit Prompt." Make your changes and save. This way, you can keep your chats organized and focus on what matters most.
### Langdock Command Palette
Quick productivity tip: Press Cmd + K (Mac) or Ctrl + K (Windows) to open Langdock's command palette. This lets you search commands, navigate your workspace, and access features without clicking through menus.
Try it right now: Open Langdock → press Cmd/Ctrl + K → search for any feature.
### Brainstorm Use Cases
Want AI suggestions tailored to your specific role? Try this prompt template in Langdock:
"I am a \[JOB ROLE] at \[COMPANY NAME], a company working in \[PRODUCT/INDUSTRY]. In my daily work, I regularly \[IMPORTANT RECURRING TASK]. I want to improve this work by increasing quality, executing faster, or reducing effort. Please list 10 different ways a large language model can assist with activities related to this task."
This approach generates personalized AI use cases instead of generic suggestions. Give it a try and see what specific ideas come up for your role.
### Selecting a Theme
Langdock offers two visual themes to match your working preferences. To customize your theme, navigate to your account settings and select the Preferences tab.
In the themes section, you have three specific options. Choose "System" to automatically sync with your computer's light or dark mode settings, so Langdock switches themes when your system does.
Alternatively, select "Light" or "Dark" to lock Langdock into your preferred theme regardless of your system settings. This gives you consistent visual experience that won't change when your computer switches between light and dark modes.
Link to account Settings: [https://app.langdock.com/settings/account/preferences](https://app.langdock.com/settings/account/preferences)
***
## Level: Regular
For users comfortable with Langdock basics who are ready to unlock productivity features and workflow improvements.
### Prompt Library Shortcut
Productivity shortcut: Type "@" in any Langdock chat input field to instantly search your saved prompts. This lets you reuse prompts without leaving your current conversation.
Try it: Open any chat → type "@" → search for a saved prompt → select and use it immediately.
More shortcuts:
[https://docs.langdock.com/resources/tricks-and-shortcuts#tricks-and-shortcuts](https://docs.langdock.com/resources/tricks-and-shortcuts#tricks-and-shortcuts)
### Pin Important Chats
Keep your most important Langdock conversations easily accessible: hover over any chat in your sidebar → click the three dots menu → choose "Pin."
Pinned chats stay at the top of your sidebar, so you can return to key projects and ongoing work instantly. Perfect for daily standups, project planning, or frequently referenced conversations.
### Memory Feature
Make your Langdock conversations more personalized with the memory feature. When enabled, Langdock remembers details you share (like your preferred programming language, project context, or work style) and uses this knowledge in future chats.
Enable it:
[https://app.langdock.com/settings/account/memory](https://app.langdock.com/settings/account/memory)
Once enabled, share relevant details about your work, and Langdock will remember them across all future conversations. This saves time on context-setting and makes responses more relevant to your specific needs.
### Compare Model Responses
Want to see how different AI models handle the same question? Click the reload button next to any Langdock response, then choose a different model from the dropdown. Langdock generates a new response using your selected model.
This helps you compare approaches and find which models work best for different types of tasks. Try comparing GPT-4.1 with Claude Sonnet 4 for your next complex question.
### Prompt Variables
You can make your saved prompts more flexible with variables. In Langdock's Prompt Library, wrap any changeable text in double curly braces, like `{{variableName}}`. When you use the prompt, Langdock will ask you to fill in those variables.
Example: "Write a `{{documentType}}` for `{{audience}}` about `{{topic}}`"
This lets you reuse one prompt template for multiple situations instead of creating separate prompts for each variation.
### Agent Feedback
Help improve custom Agents in your workspace by providing feedback. For any Agent response, click thumbs up/down for quick feedback, or use the agent menu → "Send feedback" for detailed comments.
If you own an Agent: Check feedback in the Agent builder → three dots menu → Usage insights → Feedback. You can also download feedback data to review patterns and improve your Agent's performance.
This feedback loop helps Agent creators understand what's working and what needs improvement.
### Chat Sharing
Keep your team aligned by sharing important Langdock conversations. Click "Share" in any chat to create a workspace-only link. Anyone in your workspace can view the full conversation, including new messages and attachment names.
Manage all your shared chats:
[https://app.langdock.com/settings/account/shared-chats](https://app.langdock.com/settings/account/shared-chats)
Perfect for sharing research findings, brainstorming sessions, or technical solutions with your team.
### Canvas
You can use Langdock's Canvas tool for collaborative document creation and code editing. Canvas provides a flexible workspace where you can format text, add headings and lists, work on code snippets, and get inline AI suggestions as you write.
Canvas automatically tracks version history and lets you work on documents without switching to external apps. Perfect for brainstorming, documentation, prototyping, or any collaborative writing project.
Learn more:
[https://docs.langdock.com/product/chat/canvas-for-writing](https://docs.langdock.com/product/chat/canvas-for-writing)
### Model Selection Guide
Choose the right AI model for your specific needs with Langdock's detailed model guide. Each model has different strengths: some excel at coding, others at creative writing, analysis, or technical documentation.
Review the guide to understand your options and pick models that match your tasks:
[https://docs.langdock.com/resources/models](https://docs.langdock.com/resources/models)
This helps you get better results by matching the right model to each type of work.
If your workspace uses custom API keys, check with your admin about which models are available, as the public model guide might include models not accessible in your setup.
### Mermaid Diagrams
Create professional diagrams instantly in Langdock by asking for Mermaid diagrams. Simply prompt: "Create a flowchart of our onboarding process" or "Make a diagram showing our API architecture."
Langdock generates the diagram automatically. You can zoom, pan, copy the code, or download the image directly from your chat. No manual formatting required.
Perfect for visualizing workflows, processes, system architecture, or any concept that benefits from a diagram.
Documentation:
[https://docs.langdock.com/product/chat/mermaid](https://docs.langdock.com/product/chat/mermaid)
### Projects
In Langdock, you can organize your chats by grouping them into Projects. This helps you keep track of related conversations, whether you’re working on a marketing campaign, process analysis, or getting ready for a presentation.
Projects get even more useful when you attach documents and extra instructions in the project settings. This way, every chat in the project uses the same information and follows the same guidelines. If you want to learn more about how Projects work, check out the documentation below.
Documentation:
[https://docs.langdock.com/product/navigation/projects](https://docs.langdock.com/product/navigation/projects)
### Hide the sidebar
When you need to focus on your chat in Langdock, you can quickly hide the sidebar with a simple shortcut. Press:
`Cmd + Shift + S` (macOS)\
`Ctrl + Shift + S` (Windows)
This frees up space so you can concentrate on the conversation. For more handy shortcuts, check the Langdock documentation.
Documentation: [https://docs.langdock.com/resources/tricks-and-shortcuts](https://docs.langdock.com/resources/tricks-and-shortcuts)
### Advanced Prompt Elements
Master prompt engineering with Langdock's core prompt elements for consistently better responses:
* **Persona**: Assign a specific role ("You are a senior software architect")
* **Task**: Define exactly what you want accomplished
* **Context**: Provide background information, examples, or uploaded documents
* **Format**: Specify output structure, style, tone, and length constraints
Example: "You are a technical writer. Create a user guide for our API authentication system. Use the attached API documentation as context. Format as a step-by-step guide with code examples, using a professional but approachable tone."
Detailed guide:
[https://docs.langdock.com/resources/prompt-elements](https://docs.langdock.com/resources/prompt-elements)
***
## Level: Poweruser
For experienced users ready to master advanced prompting techniques and maximize their Langdock efficiency.
### Custom Instructions
Customize how Langdock responds to you with Custom Instructions. Set up details about your role, communication preferences, and work style so every response is tailored to your needs.
Enable: Settings → Individual Preferences → Custom Instructions
Example instruction: "Always end responses with 4 numbered follow-up questions. Let me choose one by entering the number, then provide a detailed answer to my selection in the next message."
This creates a consistent, personalized experience across all your Langdock conversations.
Full guide:
[https://docs.langdock.com/resources/custom-instructions#custom-instructions](https://docs.langdock.com/resources/custom-instructions#custom-instructions)
### Precise Citation Responses
Make your Langdock conversations more precise by highlighting specific parts of any response and clicking the citation button. This lets you respond directly to particular sections instead of the entire message.
Perfect for:
* Asking follow-up questions about specific points
* Requesting clarification on particular details
* Building on specific ideas while ignoring others
This keeps conversations focused and helps you dig deeper into exactly what matters most.
### Instant New Chat
Start fresh conversations instantly: Press Cmd + Shift + O (Mac) or Ctrl + Shift + O (Windows) to open a new chat without navigating through menus.
Perfect for quickly switching topics, starting focused discussions, or beginning new projects while keeping your current conversation intact.
All keyboard shortcuts:
[https://docs.langdock.com/resources/tricks-and-shortcuts](https://docs.langdock.com/resources/tricks-and-shortcuts)
# Usage Exports
Source: https://docs.langdock.com/administration/usage-exports
Export detailed usage analytics from your Langdock workspace to CSV format for external analysis, reporting, and compliance purposes.
Usage exports are available to workspace administrators and provide up to 12 months of historical data across users, agents, workflows, projects, and models.
## Accessing Usage Exports
Navigate to your workspace analytics page and click the **Export** button in the top right corner to open the export configuration dialog.
1. Go to your [workspace analytics page](https://app.langdock.com/settings/workspace/analytics) in workspace settings.
2. Select the **Export** button in the top right corner.
3. Choose your data type and date range in the export dialog.
4. Click **Generate CSV** to create and download your export file.
## Export Configuration
### Data Types
Select which type of usage data to export:
* **Users** - Individual user activity, message counts, and feature usage across the selected time period
* **Agents** - Agent usage statistics, interaction counts, and performance metrics
* **Workflows** - Workflow execution data and usage patterns. Only available when the workflow product is enabled in your workspace (currently in closed beta; launching soon for all users)
* **Projects** - Project-level usage data
* **Models** - Model usage statistics and token consumption data
### Date Range Options
Choose from predefined ranges or select a custom period:
* **This month** (e.g., July 2025)
* **Last month** (e.g., June 2025)
* **This week** (e.g., July 30 - August 2)
* **Last week** (e.g., July 20 - July 26)
* **Choose custom range** - Select specific start and end dates
Historical data is limited to 12 months. You cannot export data older than 12 months from the current date.
## Export Data Structure
Each export generates a CSV file with one row per entity (user, agent, workflow, project, or model) and columns containing relevant metrics for the selected time period.
## Users Export
The Users export provides data about individual user activity within your workspace.
### Column Definitions
| Column | Description |
| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `period_start` | Start date of the report (YYYY-MM-DD, UTC) |
| `period_end` | End date of the report (YYYY-MM-DD, UTC) |
| `org_id` | ID of the workspace |
| `user_id` | ID of the user |
| `name` | Name of the user |
| `email` | Email of the user |
| `role` | Role of the user (member, editor, or admin) at the time of the export |
| `joined_at` | Date the user joined the workspace (YYYY-MM-DD, UTC) |
| `messages_total` | Total number of messages the user has sent in the period |
| `messages_total_rank` | Relative position of the user in a list of all workspace users sorted by total messages in the period (1 = most messages) |
| `messages_chat` | Number of messages the user has sent in regular chats in the period |
| `messages_chat_rank` | Relative position of the user in a list of all workspace users sorted by chat messages in the period (1 = most messages) |
| `messages_agents` | Number of messages the user has sent to agent chats in the period |
| `messages_agents_rank` | Relative position of the user in a list of all workspace users sorted by agent messages in the period (1 = most messages) |
| `agents_messaged` | Number of distinct agents the user messaged in the period |
| `agents_to_messages` | JSON object mapping agent\_id to messages\_count for each agent the user messaged in the period |
| `messages_projects` | Number of messages the user has sent to project chats in the period |
| `messages_projects_rank` | Relative position of the user in a list of all workspace users sorted by project messages in the period (1 = most messages) |
| `projects_messaged` | Number of distinct projects the user messaged in the period |
| `projects_to_messages` | JSON object mapping project\_id to messages\_count for each project the user messaged in the period |
| `model_to_messages_total` | JSON object mapping model\_name to messages\_count for each model the user messaged in the period |
| `action_messages` | Number of messages from the user generated by actions in the period |
| `action_messaged` | Number of distinct actions the user triggered in the period |
| `action_to_messages` | JSON object mapping action\_name to messages\_count for each action the user triggered in the period. Actions include capabilities like Canvas or Web Search as well as actions from integrations |
For workspaces with user-level data disabled, certain identifying columns (like user email and name) are excluded from exports to maintain privacy compliance.
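Because several columns (such as `agents_to_messages`, `projects_to_messages`, and `model_to_messages_total`) contain JSON objects, a small post-processing step is useful when analyzing the CSV. Here is a minimal sketch using Python's standard library; the sample data and IDs are hypothetical, while the column names follow the table above:

```python
import csv
import io
import json

# Hypothetical two-row sample in the shape of the Users export described above.
sample = io.StringIO(
    "user_id,email,messages_total,agents_to_messages\n"
    'u_1,alice@example.com,42,"{""agent_a"": 30, ""agent_b"": 12}"\n'
    'u_2,bob@example.com,7,"{}"\n'
)

rows = list(csv.DictReader(sample))

# Decode the JSON mapping column and sum messages per agent across all users.
per_agent: dict[str, int] = {}
for row in rows:
    for agent_id, count in json.loads(row["agents_to_messages"]).items():
        per_agent[agent_id] = per_agent.get(agent_id, 0) + count

print(per_agent)  # {'agent_a': 30, 'agent_b': 12}
```

For a real export, replace the in-memory sample with `open("users_export.csv", newline="")` (the filename depends on what you downloaded) and the same decoding applies to the other JSON mapping columns.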
## Agents Export
The Agents export shows usage statistics for each agent in your workspace.
### Column Definitions
| Column | Description |
| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| `period_start` | Start date of the report (YYYY-MM-DD, UTC) |
| `period_end` | End date of the report (YYYY-MM-DD, UTC) |
| `org_id` | ID of the workspace |
| `agent_id` | ID of the agent |
| `agent_name` | Name of the agent at the time of the export |
| `agent_description` | Description of the agent at the time of the export |
| `agent_url` | URL of the agent |
| `agent_owner_id` | ID of the user who is the owner of the agent at the time of the export |
| `agent_owner_email` | Email of the user who is the owner of the agent at the time of the export |
| `messages` | Number of messages sent to this agent in the period |
| `unique_users` | Number of distinct users who sent at least 1 message to this agent in the period |
| `sum_prompt_tokens` | Sum of all prompt tokens used by this agent in the period. Only for customers who bring their own model keys |
| `avg_prompt_tokens` | Average number of prompt tokens per request for this agent in the period. Only for customers who bring their own model keys |
| `min_prompt_tokens` | Smallest number of prompt tokens used per request by this agent in the period. Only for customers who bring their own model keys |
| `max_prompt_tokens` | Largest number of prompt tokens used per request by this agent in the period. Only for customers who bring their own model keys |
| `sum_completion_tokens` | Total completion tokens used by this agent in the period. Only for customers who bring their own model keys |
| `avg_completion_tokens` | Average number of completion tokens used per request by this agent in the period. Only for customers who bring their own model keys |
| `min_completion_tokens` | Smallest number of completion tokens used per request by this agent in the period. Only for customers who bring their own model keys |
| `max_completion_tokens` | Largest number of completion tokens used per request by this agent in the period. Only for customers who bring their own model keys |
For workspaces with user-level data disabled, certain identifying columns (like user email and name) are excluded from exports to maintain privacy compliance.
## Workflows Export
The Workflows export provides execution data for automated workflows in your workspace.
This export is only available when the workflow product is enabled in your workspace.
### Column Definitions
| Column | Description |
| ----------------------- | --------------------------------------------------------------------------------------------------------------- |
| `period_start` | Start date of the report (YYYY-MM-DD, UTC) |
| `period_end` | End date of the report (YYYY-MM-DD, UTC) |
| `org_id` | ID of the workspace |
| `workflow_id` | ID of the workflow |
| `workflow_name` | Name of the workflow at the time of the export |
| `workflow_url` | URL of the workflow |
| `workflow_owner_id` | ID of the user who is the owner of the workflow at the time of the export |
| `workflow_owner_email` | Email of the user who is the owner of the workflow at the time of the export |
| `tasks` | Number of executions of this workflow in the period |
| `steps` | Number of executed steps (across all tasks) of this workflow in the period |
| `sum_prompt_tokens` | Sum of all prompt tokens used by this workflow in the period. Only for customers who bring their own model keys |
| `sum_completion_tokens` | Total completion tokens used by this workflow in the period. Only for customers who bring their own model keys |
For workspaces with user-level data disabled, certain identifying columns (like user email and name) are excluded from exports to maintain privacy compliance.
## Projects Export
The Projects export shows usage data for collaborative projects in your workspace.
### Column Definitions
| Column | Description |
| --------------------- | ------------------------------------------------------------ |
| `period_start` | Start date of the report (YYYY-MM-DD, UTC) |
| `period_end` | End date of the report (YYYY-MM-DD, UTC) |
| `org_id` | ID of the workspace |
| `project_id` | ID of the project |
| `project_name` | Name of the project at the time of the export |
| `project_owner_id` | ID of the user who is the owner of the project |
| `project_owner_email` | Email of the user who is the owner of the project |
| `messages` | Number of messages within chats of the project in the period |
For workspaces with user-level data disabled, certain identifying columns (like user email and name) are excluded from exports to maintain privacy compliance.
## Models Export
The Models export provides usage statistics for each AI model used in your workspace.
### Column Definitions
| Column | Description |
| ----------------------- | ---------------------------------------------------------------------------------------------------------------------- |
| `period_start` | Start date of the report (YYYY-MM-DD, UTC) |
| `period_end` | End date of the report (YYYY-MM-DD, UTC) |
| `org_id` | ID of the workspace |
| `name` | Name of the model at the time of the export |
| `requests` | Number of requests sent to this model across all products in the period |
| `sum_prompt_tokens` | Sum of all prompt tokens sent to this model in the period. Only for customers who bring their own model keys |
| `sum_completion_tokens` | Sum of all completion tokens generated by this model in the period. Only for customers who bring their own model keys |
For workspaces with user-level data disabled, certain identifying columns (like user email and name) are excluded from exports to maintain privacy compliance.
## Actions Referenced in User Export
The following actions may appear in the `action_to_messages` column of the Users export:
* **Canvas**: Used to edit documents and code in a structured interface. Messages represent user prompts that trigger Canvas interactions (creation, querying, deletion).
* **Web Search**: Used to search external web sources for real-time information. Messages represent user prompts that trigger web search functionality.
* **File analysis**: Used to analyze uploaded documents, spreadsheets, and other file types. Messages represent user prompts that trigger file processing and analysis.
* **Code execution**: Used to run and test code in various programming languages. Messages represent user prompts that trigger code execution requests.
* **Integration actions**: Actions from connected integrations (e.g., `Hubspot_create_contact`, `Slack_send_message`). Messages represent user prompts that trigger API calls to external services through Langdock integrations.
## Data Handling and Privacy
### Null Values
Empty or unavailable data fields are handled as follows:
* Numeric fields: Display as `0` or empty
* Text fields: Display as empty strings
* JSON objects: Display as empty objects `{}`
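When post-processing exports, it helps to normalize these empty values before aggregating. A minimal JavaScript sketch, assuming the JSON-object columns (such as `action_to_messages`) arrive as raw strings from a CSV parser; the function names here are illustrative, not part of any Langdock SDK:

```javascript
// Normalize raw export cell values according to the rules above:
// an empty JSON-object column becomes {}, an empty numeric column becomes 0.
function parseJsonColumn(raw) {
  if (raw === undefined || raw === null || raw.trim() === '') return {};
  return JSON.parse(raw);
}

function parseNumericColumn(raw) {
  if (raw === undefined || raw === null || raw.trim() === '') return 0;
  return Number(raw);
}

// Example: sum messages across all actions for one user row
const actionToMessages = parseJsonColumn('{"Web Search": 5, "Canvas": 2}');
const totalActionMessages = Object.values(actionToMessages).reduce(
  (sum, n) => sum + n,
  0
);
console.log(totalActionMessages); // 7
```

Handling the empty-cell cases explicitly avoids `JSON.parse` throwing on blank strings and `Number('')` silently producing `0` versus `NaN` inconsistencies.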
### Data Retention
* Historical data is available for up to 12 months
* Export data reflects the state at the time of export
* User roles, names, and other attributes show values as of the period end date
### BYOK Considerations
For customers using Bring Your Own Keys (BYOK):
* Token usage data is available with detailed metrics
* All standard columns plus token-specific columns are included
# Workspace Setup
Source: https://docs.langdock.com/administration/workspace-setup
Here are all the steps to customize your Langdock workspace. This guide takes you from a blank, new workspace to being ready for onboarding your teams and rolling out AI in your company. All steps are optional, but we recommend customizing the workspace to suit your technical and usability needs.
## Security: SAML and SCIM
If you're using an identity and access management (IAM) solution like Microsoft Entra or Okta, the first step is to set up SAML 2.0 and SCIM. You can do this in the [security settings](https://app.langdock.com/settings/workspace/security); dedicated guides are available [here](/settings/security/entra).
## Member, Group and Role Settings
**Members:** Manage your members in the [member settings](https://app.langdock.com/settings/workspace/user-management/members) to change user roles or invite users (if you're not using an identity and access management solution).
**Roles:** Configure permissions for different roles in the [roles settings](https://app.langdock.com/settings/workspace/user-management/roles). We recommend starting with the default configuration and letting all users (except admins) have "member" permission initially. This can be changed at any time. You can find an overview of recommended permissions based on our customers' rollouts [here](/administration/permissions#permission-recommendations).
**Groups (optional):** Create groups to reflect different project teams or departments in the [Groups settings](https://app.langdock.com/settings/workspace/user-management/groups). You can also sync groups from your identity and access management solution.
## General Workspace customization
Add a workspace icon, rename the workspace to your company name, and add a description in the [general settings](https://app.langdock.com/settings/workspace/general). The company description is sent to the model with every prompt from every user and helps the model understand your company context.
## Further customization
You can further customize your Langdock workspace appearance to become your company-branded AI solution in the [customization menu](https://app.langdock.com/settings/workspace/customizations). Here are the most important settings:
**Brand color:** Add your company's hex code to change the color of primary buttons and highlights. We recommend avoiding colors that are too bright or too dark to work well with both dark and light mode.
**Chat disclaimer:** Many workspaces add a chat disclaimer to communicate legal disclaimers or that AI models can make mistakes. The disclaimer appears below the chat input field.
**Info boxes:** In empty chats, you can display up to three info boxes with tricks and hints. You can also link to your internal wiki if you have written documentation.
**Custom links:** In the sidebar, you can add custom links below the agents, prompt library, and integrations sections. These links can forward to your documentation.
## Filling the prompt library and agent library
To give your users a great start with example use cases, you can add agents or prompts to the workspace. This could include an internal help chatbot, or frequently used prompts such as an email writer, a grammar corrector, or a translator. You can find inspiration [here](https://docs.langdock.com/resources/agent-templates). Present these in workshops and check-ins with your users.
# Agents Completions API
Source: https://docs.langdock.com/api-endpoints/agent/agent
POST /agent/v1/chat/completions
Creates a model response for a given Agent using Vercel AI SDK compatible format.
Creates a model response for a given agent ID, or pass in an Agent configuration that should be used for your request. This endpoint uses the Vercel AI SDK compatible message format for seamless integration with modern AI applications.
To share an agent with an API key, follow [this guide](/api-endpoints/agent/agent-api-guide).
**Vercel AI SDK Compatible**: This endpoint uses the Vercel AI SDK's UIMessage format, making it compatible with the `useChat` hook and other Vercel AI SDK features.
## Base URL
```
https://api.langdock.com/agent/v1/chat/completions
```
For dedicated deployments, use `https://<your-deployment-domain>/api/public/agent/v1/chat/completions` instead.
## Request Parameters
| Parameter | Type | Required | Description |
| ---------- | ------- | ----------------------------- | ------------------------------------------------- |
| `agentId` | string | One of agentId/agent required | ID of an existing agent to use |
| `agent` | object | One of agentId/agent required | Configuration for a temporary agent |
| `messages` | array | Yes | Array of UIMessage objects (Vercel AI SDK format) |
| `stream` | boolean | No | Enable streaming responses (default: false) |
| `output` | object | No | Structured output format specification |
## Message Format (Vercel AI SDK UIMessage)
The Agents API uses the Vercel AI SDK's UIMessage format for maximum compatibility with modern AI frameworks.
### UIMessage Structure
Each message in the `messages` array should contain:
```typescript theme={null}
interface UIMessage {
id: string; // Unique identifier for this message
role: 'user' | 'assistant' | 'system' | 'tool';
parts: MessagePart[]; // Array of message parts
}
interface MessagePart {
type: 'text' | 'file' | 'tool-invocation' | 'tool-result';
// For text parts
text?: string;
// For file parts
url?: string; // Format: "attachment://uuid"
name?: string;
mimeType?: string;
// For tool parts
toolCallId?: string;
toolName?: string;
args?: object;
result?: any;
}
```
### Example Messages
#### User Message with Text
```javascript theme={null}
{
id: "msg_1",
role: "user",
parts: [
{
type: "text",
text: "Hello, how are you?"
}
]
}
```
#### User Message with Attachment
```javascript theme={null}
{
id: "msg_2",
role: "user",
parts: [
{
type: "text",
text: "Please analyze this document"
},
{
type: "file",
url: "attachment://550e8400-e29b-41d4-a716-446655440000",
name: "document.pdf",
mimeType: "application/pdf"
}
]
}
```
#### Assistant Message with Tool Call
```javascript theme={null}
{
id: "msg_3",
role: "assistant",
parts: [
{
type: "tool-invocation",
toolCallId: "call_123",
toolName: "web_search",
args: {
query: "latest news"
}
}
]
}
```
## Agent Configuration
When creating a temporary agent using the `agent` parameter, you can specify:
* `name` (required) - Name of the agent (max 64 chars)
* `instructions` (required) - System instructions (max 16384 chars)
* `description` - Optional description (max 256 chars)
* `temperature` - Temperature between 0-1
* `model` - Model ID to use (see [Available Models](/api-endpoints/agent/agent-models) for options)
* `capabilities` - Enable features like web search, data analysis, image generation
* `actions` - Custom API integrations
* `vectorDb` - Vector database connections
* `knowledgeFolderIds` - IDs of knowledge folders to use
* `attachmentIds` - Array of UUID strings identifying attachments to use
You can retrieve a list of available models using the [Models API](/api-endpoints/agent/agent-models).
## Using Tools via API
When an agent has tools configured (called "Actions" in the Langdock UI), it will automatically use them to respond to API requests when appropriate.
The connection must be set to "preselected connection" (shared with other users) for tool authentication to work.
Tools with **"Require human confirmation"** enabled do not work via API—they require manual approval in the Langdock UI. To use a tool via API, disable this setting in the agent configuration.
## Structured Output
You can specify a structured output format using the optional `output` parameter:
| Field | Type | Description |
| -------- | ----------------------------- | -------------------------------------------------------------- |
| `type` | "object" \| "array" \| "enum" | The type of structured output |
| `schema` | object | JSON Schema definition for the output (for object/array types) |
| `enum` | string\[] | Array of allowed values (for enum type) |
The `output` parameter behavior depends on the specified type:
* `type: "object"` with no schema: Forces the response to be a single JSON object (no specific structure)
* `type: "object"` with schema: Forces the response to match the provided JSON Schema
* `type: "array"` with schema: Forces the response to be an array of objects matching the provided schema
* `type: "enum"`: Forces the response to be one of the values specified in the `enum` array
You can use tools like [easy-json-schema](https://easy-json-schema.github.io/) to generate JSON Schemas from example JSON objects.
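As a concrete illustration, here are request-body fragments for the output types described above. The enum values and schema fields are invented for the example:

```javascript
// Enum output: the response's `output` field will be one of these strings.
const enumOutput = {
  type: 'enum',
  enum: ['positive', 'neutral', 'negative'],
};

// Object output with a schema: `output` will be a matching JSON object.
const objectOutput = {
  type: 'object',
  schema: {
    type: 'object',
    properties: {
      summary: { type: 'string' },
      wordCount: { type: 'number' },
    },
    required: ['summary', 'wordCount'],
  },
};

// A complete request body combining an existing agent with the enum output.
const requestBody = {
  agentId: 'agent_123',
  messages: [
    {
      id: 'msg_1',
      role: 'user',
      parts: [{ type: 'text', text: 'Classify the sentiment of: "Great product!"' }],
    },
  ],
  output: enumOutput,
};
```

With `enumOutput`, the returned `output` field is a plain string from the allowed list, so no JSON parsing of model text is needed on the client.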
## Streaming Responses
When `stream` is set to `true`, the API returns a stream using the Vercel AI SDK streaming format, compatible with the `useChat` hook and other Vercel AI SDK features.
### Using with Vercel AI SDK useChat Hook
```typescript theme={null}
'use client';
import { useChat } from '@ai-sdk/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat({
api: 'https://api.langdock.com/agent/v1/chat/completions',
headers: {
'Authorization': `Bearer ${process.env.NEXT_PUBLIC_LANGDOCK_API_KEY}`
},
body: {
agentId: 'your-agent-id'
}
});
  return (
    <div>
      {/* Render each message's text parts (UIMessage format) */}
      {messages.map((message) => (
        <div key={message.id}>
          <strong>{message.role}:</strong>{' '}
          {message.parts
            ?.filter((part) => part.type === 'text')
            .map((part, i) => <span key={i}>{part.text}</span>)}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask the agent..."
        />
      </form>
    </div>
  );
}
```
### Manual Stream Handling
```javascript theme={null}
const response = await fetch('https://api.langdock.com/agent/v1/chat/completions', {
method: 'POST',
headers: {
'Authorization': 'Bearer YOUR_API_KEY',
'Content-Type': 'application/json',
},
body: JSON.stringify({
agentId: 'agent_123',
messages: [
{
id: 'msg_1',
role: 'user',
parts: [{ type: 'text', text: 'Hello' }]
}
],
stream: true
}),
});
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = decoder.decode(value);
console.log(chunk); // Process streaming chunks
}
```
## Obtaining Attachment IDs
To use attachments in your agent conversations, you first need to upload the files using the [Upload Attachment API](/api-endpoints/agent/upload-attachments). This will return an `attachmentId` for each file, which you can then reference in message parts with `type: "file"`.
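Once the upload returns an `attachmentId`, the remaining wiring is turning it into a file part with the `attachment://` URL scheme shown in the UIMessage structure above. A small sketch; `buildFilePart` is an illustrative helper, not part of any SDK:

```javascript
// Build a `file` message part from an uploaded attachment's ID.
function buildFilePart(attachmentId, name, mimeType) {
  return {
    type: 'file',
    url: `attachment://${attachmentId}`,
    name,
    mimeType,
  };
}

const part = buildFilePart(
  '550e8400-e29b-41d4-a716-446655440000',
  'report.pdf',
  'application/pdf'
);

// The file part sits alongside the text part in the same user message.
const message = {
  id: 'msg_1',
  role: 'user',
  parts: [{ type: 'text', text: 'Summarize this report' }, part],
};
```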
## Response Format
The API returns a UIMessage object following the Vercel AI SDK format:
```typescript theme={null}
{
id: string;
role: "assistant";
parts: Array<{
type: "text" | "tool-invocation" | "tool-result";
text?: string;
toolCallId?: string;
toolName?: string;
args?: object;
result?: any;
}>;
// Structured output - included when requested
output?: object | array | string;
}
```
### Standard Response
The message contains the agent's response in the `parts` array. This can include:
* Text responses
* Tool invocations
* Tool results
### Structured Output
When the request includes an `output` parameter, the response will automatically include an `output` field containing the formatted structured data. The type of this field depends on the requested output format:
* If `output.type` was "object": Returns a JSON object (with schema validation if schema was provided)
* If `output.type` was "array": Returns an array of objects matching the provided schema
* If `output.type` was "enum": Returns a string matching one of the provided enum values
## Examples
### Using an Existing Agent
```javascript theme={null}
const response = await fetch(
"https://api.langdock.com/agent/v1/chat/completions",
{
method: "POST",
headers: {
"Authorization": "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
},
body: JSON.stringify({
agentId: "agent_123",
messages: [
{
id: "msg_1",
role: "user",
parts: [
{
type: "text",
text: "Can you analyze this document for me?"
},
{
type: "file",
url: "attachment://550e8400-e29b-41d4-a716-446655440000"
}
]
}
]
})
}
);
const data = await response.json();
const responseText = data.parts.find(p => p.type === 'text')?.text;
console.log(responseText);
```
### Using a Temporary Agent Configuration
```javascript theme={null}
const response = await fetch(
"https://api.langdock.com/agent/v1/chat/completions",
{
method: "POST",
headers: {
"Authorization": "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
},
body: JSON.stringify({
agent: {
name: "Document Analyzer",
instructions: "You are a helpful agent who analyzes documents and answers questions about them",
temperature: 0.7,
model: "gpt-4",
capabilities: {
webSearch: true,
dataAnalyst: true
},
attachmentIds: ["550e8400-e29b-41d4-a716-446655440000"]
},
messages: [
{
id: "msg_1",
role: "user",
parts: [
{
type: "text",
text: "What are the key points in the document?"
}
]
}
]
})
}
);
const data = await response.json();
console.log(data);
```
### Using Structured Output with Schema
```javascript theme={null}
const response = await fetch(
"https://api.langdock.com/agent/v1/chat/completions",
{
method: "POST",
headers: {
"Authorization": "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
},
body: JSON.stringify({
agent: {
name: "Weather Agent",
instructions: "You are a helpful weather agent",
model: "gpt-4",
capabilities: {
webSearch: true
}
},
messages: [
{
id: "msg_1",
role: "user",
parts: [
{
type: "text",
text: "What's the weather in Paris, Berlin and London today?"
}
]
}
],
output: {
type: "array",
schema: {
type: "object",
properties: {
weather: {
type: "object",
properties: {
city: { type: "string" },
tempInCelsius: { type: "number" },
tempInFahrenheit: { type: "number" }
},
required: ["city", "tempInCelsius", "tempInFahrenheit"]
}
}
}
}
})
}
);
const data = await response.json();
console.log(data.output);
// Output:
// [
// { "weather": { "city": "Paris", "tempInCelsius": 1, "tempInFahrenheit": 33 } },
// { "weather": { "city": "Berlin", "tempInCelsius": 1, "tempInFahrenheit": 35 } },
// { "weather": { "city": "London", "tempInCelsius": 7, "tempInFahrenheit": 45 } }
// ]
```
### Using with Next.js Server Actions
```typescript theme={null}
// app/actions.ts
'use server';
import { generateId } from 'ai';
export async function chatWithAgent(message: string) {
const response = await fetch(
'https://api.langdock.com/agent/v1/chat/completions',
{
method: 'POST',
headers: {
'Authorization': `Bearer ${process.env.LANGDOCK_API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
agentId: process.env.AGENT_ID,
messages: [
{
id: generateId(),
role: 'user',
parts: [
{
type: 'text',
text: message
}
]
}
]
})
}
);
const data = await response.json();
return data.parts.find(p => p.type === 'text')?.text;
}
```
## Rate Limits
The rate limit for the Agents Completions endpoint is **500 RPM (requests per minute)** and **60,000 TPM (tokens per minute)**. Rate limits are defined at the workspace level, not per API key, and each model has its own rate limit. If you exceed your rate limit, you will receive a `429 Too Many Requests` response.
Please note that the rate limits are subject to change. Refer to this documentation for the most up-to-date information. If you need a higher rate limit, please contact us at [support@langdock.com](mailto:support@langdock.com).
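A common way to stay within these limits is to retry on `429` with exponential backoff. A minimal sketch; the delay schedule is a suggestion, not an official recommendation:

```javascript
// Exponential backoff delay in milliseconds: 1s, 2s, 4s, ... capped at 30s.
function backoffDelay(attempt) {
  return Math.min(1000 * 2 ** attempt, 30000);
}

// Retry a fetch call when the API answers 429 Too Many Requests.
async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; ; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429 || attempt >= maxRetries) return response;
    await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
  }
}
```

Since limits are enforced per workspace, a shared limiter or queue in front of all callers is more reliable than per-client retries alone.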
## Error Handling
```javascript theme={null}
try {
const response = await fetch('https://api.langdock.com/agent/v1/chat/completions', options);
if (!response.ok) {
const error = await response.json();
throw new Error(error.message || 'Request failed');
}
const data = await response.json();
// Process response
} catch (error) {
console.error('Error:', error.message);
}
```
Common error status codes:
* `400` - Invalid request parameters or malformed message format
* `401` - Invalid or missing API key
* `403` - Insufficient permissions or agent not shared with API key
* `404` - Agent not found
* `429` - Rate limit exceeded
* `500` - Server error
## Migrating from Assistants API
If you're migrating from the older Assistants API, see our [comprehensive migration guide](/api-endpoints/assistant/assistant-to-agent-migration) which covers:
* Message format changes
* Request/response structure differences
* Code migration examples
* Vercel AI SDK integration patterns
Langdock intentionally blocks browser-origin requests to protect your API key and ensure your applications remain secure. For more information, please see our guide on [API Key Best Practices](/administration/api-key-best-practices).
# Sharing Agents with API Keys
Source: https://docs.langdock.com/api-endpoints/agent/agent-api-guide
Learn how to create an API key in Langdock and share an agent with it for programmatic access.
This is the new Agents API with native Vercel AI SDK compatibility. If you're using the legacy Assistants API, see the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration).
An admin needs to create the API key and share the agent with it. If you're not an admin, invite one as an editor to your agent using the "Share" button.
## How to create an API key
1. Navigate to [Langdock](https://app.langdock.com) and open the workspace settings from the dropdown menu.
2. Click on **API** under Products in the sidebar.
3. Click **Create API key**, enter a name, select the required scopes (at minimum "Agent API"), and confirm.
4. Copy your API key and store it securely. You won't be able to view it again.
## How to share an agent with the API key
1. Navigate to **Agents** in the sidebar.
2. Create a new agent or select an existing one. Enter at least a name to save it.
3. In the agent editor, click the **Share** button in the top right corner.
4. The share dialog opens showing current access settings.
5. Search for your API key by name and add it to share the agent with the API.
Only admins can connect an agent with an API key. If you don't see API keys in the share menu, ask an admin to perform this step.
## Testing the API connection
Once shared, you can test your agent via the [Agent API documentation](/api-endpoints/agent/agent). Use your API key and the agent ID from the URL (`https://app.langdock.com/agents/AGENT_ID/edit`).
Langdock blocks browser-origin requests to protect your API key. For more information, see our guide on [API Key Best Practices](/administration/api-key-best-practices).
# Agent Create API
Source: https://docs.langdock.com/api-endpoints/agent/agent-create
POST /agent/v1/create
Create a new agent programmatically
This is the new Agents API with native Vercel AI SDK compatibility. If you're using the legacy Assistants API, see the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration).
Creates a new agent in your workspace programmatically. The created agent can be used via the chat completions endpoint or accessed through the Langdock UI.
Requires an API key with the `AGENT_API` scope. Created agents are automatically shared with the API key for use in chat completions.
## Request Parameters
| Parameter | Type | Required | Description |
| ---------------------- | --------- | -------- | ----------------------------------------------------------- |
| `name` | string | Yes | Name of the agent (1-255 characters) |
| `description` | string | No | Description of what the agent does (max 256 chars) |
| `emoji` | string | No | Emoji icon for the agent (e.g., "🤖") |
| `instruction` | string | No | System prompt/instructions for the agent (max 16384 chars) |
| `inputType` | string | No | Input type: "PROMPT" or "STRUCTURED" (default: "PROMPT") |
| `model` | string | No | Model UUID to use (uses workspace default if not provided) |
| `creativity` | number | No | Temperature between 0-1 (default: 0.3) |
| `conversationStarters` | string\[] | No | Array of suggested prompts to help users get started |
| `actions` | array | No | Array of action objects for custom integrations |
| `inputFields` | array | No | Array of form field definitions (for STRUCTURED input type) |
| `attachments` | string\[] | No | Array of attachment UUIDs to include with the agent |
| `webSearch` | boolean | No | Enable web search capability (default: false) |
| `imageGeneration` | boolean | No | Enable image generation capability (default: false) |
| `dataAnalyst` | boolean | No | Enable code interpreter capability (default: false) |
| `canvas` | boolean | No | Enable canvas capability (default: false) |
### Actions Configuration
Each action in the `actions` array should contain:
* `actionId` (required) - UUID of the action from an enabled integration
* `requiresConfirmation` (optional) - Whether to require user confirmation before executing (default: false)
Only actions from integrations enabled in your workspace can be used.
### Input Fields Configuration
When using `inputType: "STRUCTURED"`, you can define form fields in the `inputFields` array:
| Field | Type | Required | Description |
| ------------- | --------- | -------- | ---------------------------------------------- |
| `slug` | string | Yes | Unique identifier for the field |
| `type` | string | Yes | Field type (see supported types below) |
| `label` | string | Yes | Display label for the field |
| `description` | string | No | Help text for the field |
| `required` | boolean | No | Whether the field is required (default: false) |
| `order` | number | Yes | Display order (0-indexed) |
| `options` | string\[] | No | Options for SELECT type fields |
| `fileTypes` | string\[] | No | Allowed file types for FILE type fields |
**Supported Field Types:**
* `TEXT` - Single line text input
* `MULTI_LINE_TEXT` - Multi-line text area
* `NUMBER` - Numeric input
* `CHECKBOX` - Boolean checkbox
* `FILE` - File upload
* `SELECT` - Dropdown selection
* `DATE` - Date picker
## Obtaining Attachment IDs
To include attachments with your agent, first upload files using the [Upload Attachment API](/api-endpoints/agent/upload-attachments). This will return attachment UUIDs that you can include in the `attachments` array.
## Examples
### Creating a Basic Agent
```javascript theme={null}
const axios = require("axios");
async function createBasicAgent() {
const response = await axios.post(
"https://api.langdock.com/agent/v1/create",
{
name: "Document Analyzer",
description: "Analyzes and summarizes documents",
emoji: "📄",
instruction: "You are a helpful agent that analyzes documents and provides clear summaries of key information.",
creativity: 0.5,
conversationStarters: [
"Summarize this document",
"What are the key points?",
"Extract action items"
],
dataAnalyst: true,
webSearch: false
},
{
headers: {
Authorization: "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
}
}
);
console.log("Agent created:", response.data.agent.id);
}
```
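For `inputType: "STRUCTURED"`, the payload additionally carries `inputFields` as described above. A sketch of such a payload, sent with the same `axios.post` call as the basic example; the field slugs, labels, and options are invented for illustration:

```javascript
// Payload for an agent with a structured input form instead of a free prompt.
const structuredAgentPayload = {
  name: 'Translation Agent',
  instruction: 'Translate the provided text into the selected target language.',
  inputType: 'STRUCTURED',
  inputFields: [
    {
      slug: 'source_text',
      type: 'MULTI_LINE_TEXT',       // multi-line text area
      label: 'Text to translate',
      required: true,
      order: 0,
    },
    {
      slug: 'target_language',
      type: 'SELECT',                // dropdown selection
      label: 'Target language',
      options: ['German', 'French', 'Spanish'],
      required: true,
      order: 1,
    },
  ],
};
```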
## Validation Rules
The API enforces several validation rules:
* **Model** - Must be in your workspace's active models list
* **Actions** - Must belong to integrations enabled in your workspace
* **Attachments** - Must exist in your workspace and not be deleted
* **Permissions** - Your API key must have the `createAgents` permission
* **Name** - Must be between 1-255 characters
* **Description** - Maximum 256 characters
* **Instruction** - Maximum 16384 characters
* **Creativity** - Must be between 0 and 1
## Important Notes
Pre-selected OAuth connections are not supported via the API. Users must configure OAuth connections through the Langdock UI.
* Created agents are automatically shared with your API key for use in chat completions
* The API key creator becomes the owner and can manage the agent in the UI
* Attachments are bidirectionally linked to the agent
* The agent type is set to `AGENT` (not `WORKFLOW` or `PROJECT`)
* `createdBy` and `workspaceId` are automatically set from your API key
## Response Format
### Success Response (201 Created)
```typescript theme={null}
{
status: "success";
message: "Agent created successfully";
agent: {
id: string;
name: string;
description: string;
instruction: string;
emojiIcon: string;
model: string;
temperature: number;
conversationStarters: string[];
inputType: "PROMPT" | "STRUCTURED";
webSearchEnabled: boolean;
imageGenerationEnabled: boolean;
codeInterpreterEnabled: boolean;
canvasEnabled: boolean;
actions: Array<{
actionId: string;
requiresConfirmation: boolean;
}>;
inputFields: Array<{
slug: string;
type: string;
label: string;
description: string;
required: boolean;
order: number;
options: string[];
fileTypes: string[] | null;
}>;
attachments: string[];
createdAt: string;
updatedAt: string;
};
}
```
## Error Handling
```typescript theme={null}
try {
const response = await axios.post('https://api.langdock.com/agent/v1/create', ...);
} catch (error) {
if (error.response) {
switch (error.response.status) {
case 400:
console.error('Invalid parameters:', error.response.data.message);
break;
case 401:
console.error('Invalid or missing API key');
break;
case 403:
console.error('Insufficient permissions - requires AGENT_API scope');
break;
case 404:
console.error('Resource not found (model, action, or attachment)');
break;
case 500:
console.error('Server error');
break;
}
}
}
```
Langdock intentionally blocks browser-origin requests to protect your API key and ensure your applications remain secure. For more information, please see our guide on [API Key Best Practices](/administration/api-key-best-practices).
# Agent Get API
Source: https://docs.langdock.com/api-endpoints/agent/agent-get
GET /agent/v1/get
Retrieve details of an existing agent
This is the new Agents API with native Vercel AI SDK compatibility. If you're using the legacy Assistants API, see the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration).
Retrieves the complete configuration and details of an existing agent in your workspace.
Requires an API key with the `AGENT_API` scope and access to the agent you want to retrieve.
## Query Parameters
| Parameter | Type | Required | Description |
| --------- | ------ | -------- | ----------------------------- |
| `agentId` | string | Yes | UUID of the agent to retrieve |
## Examples
### Basic Retrieval
```javascript theme={null}
const axios = require("axios");
async function getAgent() {
const response = await axios.get(
"https://api.langdock.com/agent/v1/get",
{
params: {
agentId: "550e8400-e29b-41d4-a716-446655440000"
},
headers: {
Authorization: "Bearer YOUR_API_KEY"
}
}
);
console.log("Agent details:", response.data.agent);
}
```
## Validation Rules
The API enforces the following validation rules:
* **Agent access** - Your API key must have access to the agent
* **Workspace match** - Agent must belong to the same workspace as your API key
## Response Format
### Success Response (200 OK)
```typescript theme={null}
{
status: "success";
agent: {
id: string;
name: string;
description: string;
instruction: string;
emojiIcon: string;
model: string;
temperature: number;
conversationStarters: string[];
inputType: "PROMPT" | "STRUCTURED";
webSearchEnabled: boolean;
imageGenerationEnabled: boolean;
codeInterpreterEnabled: boolean;
canvasEnabled: boolean;
actions: Array<{
actionId: string;
requiresConfirmation: boolean;
}>;
inputFields: Array<{
slug: string;
type: string;
label: string;
description: string;
required: boolean;
order: number;
options: string[];
fileTypes: string[] | null;
}>;
attachments: string[];
createdAt: string;
updatedAt: string;
};
}
```
## Error Handling
```typescript theme={null}
try {
const response = await axios.get('https://api.langdock.com/agent/v1/get', ...);
} catch (error) {
if (error.response) {
switch (error.response.status) {
case 400:
console.error('Invalid agent ID format');
break;
case 401:
console.error('Invalid or missing API key');
break;
case 403:
console.error('Insufficient permissions - no access to this agent');
break;
case 404:
console.error('Agent not found');
break;
case 500:
console.error('Server error');
break;
}
}
}
```
# Agent Migration Guide
Source: https://docs.langdock.com/api-endpoints/agent/agent-migration
How to migrate agents between Langdock workspaces using the Agents API
This is the new Agents API with native Vercel AI SDK compatibility. If you're using the legacy Assistants API, see the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration).
This guide explains how to migrate agents from one Langdock workspace to another using the Agents API. This is useful when you want to replicate agent configurations across different workspaces, move agents during organizational restructuring, or create backup copies of your agents.
## Overview
The migration process involves two main steps:
1. **Export**: Retrieve the agent configuration from the source workspace using the [Agent Get API](/api-endpoints/agent/agent-get)
2. **Import**: Create a new agent in the target workspace using the [Agent Create API](/api-endpoints/agent/agent-create)
## Prerequisites
Before you begin, ensure you have:
1. **Two API keys with `AGENT_API` scope**:
* One API key for the **source workspace** (where the agent currently exists)
* One API key for the **target workspace** (where you want to migrate the agent)
2. **Access to the agent**: Your source workspace API key must have access to the agent you want to migrate
3. **Matching resources in target workspace** (if applicable):
* The model will need to be manually adjusted in the Langdock UI after migration
* If the agent uses custom actions, those integrations must be enabled in the target workspace
* Attachments need to be re-uploaded separately (they are not transferred automatically)
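The two API keys above are best kept out of the script itself. A minimal sketch of environment-based configuration (the variable names `SOURCE_API_KEY` and `TARGET_API_KEY` are illustrative, not required by the API):

```javascript theme={null}
// Read both workspace keys from the environment instead of
// hardcoding them in the migration script.
function loadMigrationConfig(env = process.env) {
  const sourceApiKey = env.SOURCE_API_KEY;
  const targetApiKey = env.TARGET_API_KEY;
  if (!sourceApiKey || !targetApiKey) {
    throw new Error("SOURCE_API_KEY and TARGET_API_KEY must both be set");
  }
  return { sourceApiKey, targetApiKey };
}
```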
## Step 1: Export the Agent from Source Workspace
Use the **Agent Get API** to retrieve the complete configuration of your agent:
```javascript theme={null}
const axios = require("axios");
async function getAgentFromSource(agentId, sourceApiKey) {
const response = await axios.get(
"https://api.langdock.com/agent/v1/get",
{
params: {
agentId: agentId
},
headers: {
Authorization: `Bearer ${sourceApiKey}`
}
}
);
return response.data.agent;
}
```
### Understanding the Response
The Get API returns the complete agent configuration including:
* `name`, `description`, `instruction` - The agent's identity and system prompt
* `emojiIcon` - The emoji icon displayed for the agent
* `model` - UUID of the model being used
* `temperature` - Creativity setting (0-1)
* `conversationStarters` - Suggested prompts for users
* `inputType` - Either "PROMPT" or "STRUCTURED"
* `inputFields` - Form field definitions (for STRUCTURED input type)
* `webSearchEnabled`, `imageGenerationEnabled`, `codeInterpreterEnabled`, `canvasEnabled` - Capability flags
* `actions` - Custom integration actions
* `attachments` - UUIDs of attached files
## Step 2: Transform the Configuration
The Get API response uses slightly different field names than the Create API expects. You need to map the fields:
```javascript theme={null}
function transformForCreate(sourceAgent) {
return {
// Basic information
name: sourceAgent.name,
description: sourceAgent.description || undefined,
emoji: sourceAgent.emojiIcon || undefined,
instruction: sourceAgent.instruction || undefined,
// Settings
creativity: sourceAgent.temperature,
inputType: sourceAgent.inputType,
conversationStarters: sourceAgent.conversationStarters || [],
// Capabilities
webSearch: sourceAgent.webSearchEnabled,
imageGeneration: sourceAgent.imageGenerationEnabled,
dataAnalyst: sourceAgent.codeInterpreterEnabled,
canvas: sourceAgent.canvasEnabled,
// Input fields (for STRUCTURED input type)
inputFields: sourceAgent.inputFields || [],
// Note: actions and attachments require special handling (see below)
};
}
```
### Field Mapping Reference
| Get API Response Field | Create API Request Field |
| ------------------------ | ------------------------ |
| `name` | `name` |
| `description` | `description` |
| `emojiIcon` | `emoji` |
| `instruction` | `instruction` |
| `temperature` | `creativity` |
| `inputType` | `inputType` |
| `conversationStarters` | `conversationStarters` |
| `webSearchEnabled` | `webSearch` |
| `imageGenerationEnabled` | `imageGeneration` |
| `codeInterpreterEnabled` | `dataAnalyst` |
| `canvasEnabled` | `canvas` |
| `inputFields` | `inputFields` |
| `actions` | `actions` |
| `attachments` | `attachments` |
## Step 3: Create the Agent in Target Workspace
Use the **Agent Create API** to create the agent in the target workspace:
```javascript theme={null}
async function createAgentInTarget(agentConfig, targetApiKey) {
const response = await axios.post(
"https://api.langdock.com/agent/v1/create",
agentConfig,
{
headers: {
Authorization: `Bearer ${targetApiKey}`,
"Content-Type": "application/json"
}
}
);
return response.data.agent;
}
```
## Complete Migration Script
Here's a complete script that combines all steps:
```javascript theme={null}
const axios = require("axios");
// Configuration
const SOURCE_API_KEY = "your-source-workspace-api-key";
const TARGET_API_KEY = "your-target-workspace-api-key";
const AGENT_ID_TO_MIGRATE = "550e8400-e29b-41d4-a716-446655440000";
async function migrateAgent() {
try {
// Step 1: Get agent from source workspace
console.log("Fetching agent from source workspace...");
const getResponse = await axios.get(
"https://api.langdock.com/agent/v1/get",
{
params: { agentId: AGENT_ID_TO_MIGRATE },
headers: { Authorization: `Bearer ${SOURCE_API_KEY}` }
}
);
const sourceAgent = getResponse.data.agent;
console.log(`Found agent: "${sourceAgent.name}"`);
// Step 2: Transform configuration for Create API
const createConfig = {
name: sourceAgent.name,
description: sourceAgent.description || undefined,
emoji: sourceAgent.emojiIcon || undefined,
instruction: sourceAgent.instruction || undefined,
creativity: sourceAgent.temperature,
inputType: sourceAgent.inputType,
conversationStarters: sourceAgent.conversationStarters || [],
webSearch: sourceAgent.webSearchEnabled,
imageGeneration: sourceAgent.imageGenerationEnabled,
dataAnalyst: sourceAgent.codeInterpreterEnabled,
canvas: sourceAgent.canvasEnabled,
inputFields: sourceAgent.inputFields || [],
// Note: actions and attachments excluded - see "Handling Special Cases"
};
// Remove undefined values
Object.keys(createConfig).forEach(key => {
if (createConfig[key] === undefined) {
delete createConfig[key];
}
});
// Step 3: Create agent in target workspace
console.log("Creating agent in target workspace...");
const createResponse = await axios.post(
"https://api.langdock.com/agent/v1/create",
createConfig,
{
headers: {
Authorization: `Bearer ${TARGET_API_KEY}`,
"Content-Type": "application/json"
}
}
);
const newAgent = createResponse.data.agent;
console.log(`Migration successful!`);
console.log(`New agent ID: ${newAgent.id}`);
console.log(`Agent name: ${newAgent.name}`);
return newAgent;
} catch (error) {
if (error.response) {
console.error(`Error ${error.response.status}: ${JSON.stringify(error.response.data)}`);
} else {
console.error("Error:", error.message);
}
throw error;
}
}
migrateAgent();
```
## Handling Special Cases
### Actions (Custom Integrations)
Actions reference integrations that must be enabled in the target workspace. Action UUIDs are specific to each workspace's integration setup.
Exclude actions from the initial migration and manually configure them in the target workspace after the agent is created.
### Attachments
Attachment UUIDs reference files stored in the source workspace. These files are not automatically transferred.
**To migrate attachments**:
1. Download the files from the source workspace
2. Re-upload them to the target workspace using the [Upload Attachment API](/api-endpoints/agent/upload-attachments)
3. Update the agent with the new attachment UUIDs
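Steps 2 and 3 can be sketched as follows, assuming you already have each file's contents in memory after downloading it from the source workspace. The helper names are illustrative and error handling is omitted for brevity; the endpoints match the Upload Attachment and Agent Update APIs documented in this guide:

```javascript theme={null}
// Step 2: re-upload a downloaded file to the target workspace and
// return the new attachment UUID.
async function reuploadAttachment(fileBuffer, fileName, targetApiKey) {
  const form = new FormData();
  form.append("file", new Blob([fileBuffer]), fileName);
  const res = await fetch("https://api.langdock.com/attachment/v1/upload", {
    method: "POST",
    headers: { Authorization: `Bearer ${targetApiKey}` },
    body: form,
  });
  const { attachmentId } = await res.json();
  return attachmentId;
}

// Step 3: attach the new UUIDs to the migrated agent. The
// `attachments` array is replaced entirely, so pass the complete
// desired list in one request.
async function attachToAgent(agentId, attachmentIds, targetApiKey) {
  await fetch("https://api.langdock.com/agent/v1/update", {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${targetApiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ agentId, attachments: attachmentIds }),
  });
}
```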
### OAuth Connections
Pre-selected OAuth connections are **not supported via the API**. Users must configure OAuth connections through the Langdock UI after migration.
## Migrating Multiple Agents
To migrate multiple agents, loop through a list of agent IDs. Adapt the `migrateAgent` function from the complete script above to accept an `agentId` parameter:
```javascript theme={null}
const AGENT_IDS = [
"agent-uuid-1",
"agent-uuid-2",
"agent-uuid-3"
];
async function migrateMultipleAgents() {
const results = [];
for (const agentId of AGENT_IDS) {
try {
console.log(`\nMigrating agent: ${agentId}`);
const newAgent = await migrateAgent(agentId);
results.push({
sourceId: agentId,
targetId: newAgent.id,
status: "success"
});
} catch (error) {
results.push({
sourceId: agentId,
status: "failed",
error: error.message
});
}
}
console.log("\n=== Migration Summary ===");
console.table(results);
}
```
## Post-Migration Checklist
After migrating an agent, verify the following in the target workspace:
* Agent appears in the Agents list with correct name and emoji
* Description and instructions are correctly transferred
* Conversation starters are present
* Capabilities (web search, image generation, etc.) are correctly enabled
* Input fields are properly configured (for STRUCTURED input type)
* Manually configure any OAuth connections through the UI
* Re-upload and attach any necessary files
* Configure custom actions/integrations if needed
* Test the agent by sending a message
## Limitations
Keep these limitations in mind when planning your migration:
1. **Attachments are not transferred** - Files must be re-uploaded to the target workspace
2. **Actions may need reconfiguration** - Integration action UUIDs are workspace-specific
3. **OAuth connections require manual setup** - Cannot be configured via API
4. **Models require manual adjustment** - The agent will use the workspace default model; adjust manually in the UI after migration
5. **Conversation history is not migrated** - Only the agent configuration is transferred
# Models for Agent API
Source: https://docs.langdock.com/api-endpoints/agent/agent-models
GET /agent/v1/models
Retrieve all available models for use with the Agent API.
This is the new Agents API with native Vercel AI SDK compatibility. If you're using the legacy Assistants API, see the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration).
Retrieve the list of available models and their IDs for use with the Agent API. This endpoint is useful when you want to see which models you can use when creating a temporary agent.
## Example Request
```javascript theme={null}
const axios = require("axios");
async function getAvailableModels() {
try {
const response = await axios.get("https://api.langdock.com/agent/v1/models", {
headers: {
Authorization: "Bearer YOUR_API_KEY",
},
});
console.log("Available models:", response.data.data);
} catch (error) {
console.error("Error fetching models:", error);
}
}
```
## Response Format
The API returns a list of available models in the following format:
### Response Fields
* `object` (string): Always `"list"`, indicating the top-level JSON object type.
* `data` (array): Array containing available model objects. Each model object has:
  * `id` (string): Unique identifier of the model (e.g., `gpt-5`).
  * `object` (string): Always `"model"`, indicating the object type.
  * `created` (number): Unix timestamp (ms) when the model was created.
  * `owned_by` (string): Owner of the model (currently always `"system"`).
```json Example response theme={null}
{
"object": "list",
"data": [
{
"id": "gpt-5",
"object": "model",
"created": 1686935735000,
"owned_by": "system"
}
// …other models
]
}
```
## Error Handling
```javascript theme={null}
try {
const response = await axios.get("https://api.langdock.com/agent/v1/models", {
headers: {
Authorization: "Bearer YOUR_API_KEY",
},
});
} catch (error) {
if (error.response) {
switch (error.response.status) {
case 400:
console.error("Invalid request parameters");
break;
case 500:
console.error("Internal server error");
break;
}
}
}
```
You can use any of these model IDs when creating a temporary agent through the Agent API. Simply specify the model ID in the `model` field of your agent configuration:
```javascript agent.js theme={null}
const response = await axios.post("https://api.langdock.com/agent/v1/chat/completions", {
agent: {
name: "Custom Agent",
instructions: "You are a helpful agent",
model: "gpt-5", // Specify the model ID here
},
messages: [
{ role: "user", content: "Hello!" },
],
});
```
```bash agent.sh theme={null}
curl https://api.langdock.com/agent/v1/chat/completions \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"agent": {
"name": "Custom Agent",
"instructions": "You are a helpful agent",
"model": "gpt-5"
},
"messages": [
{ "role": "user", "content": "Hello!" }
]
}'
```
# Agent Update API
Source: https://docs.langdock.com/api-endpoints/agent/agent-update
PATCH /agent/v1/update
Update an existing agent programmatically
This is the new Agents API with native Vercel AI SDK compatibility. If you're using the legacy Assistants API, see the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration).
Updates an existing agent in your workspace. Only the fields you provide will be updated, allowing for partial updates without affecting other configuration.
Requires an API key with the `AGENT_API` scope and access to the agent you want to update.
## Update Behavior
The update endpoint uses partial update semantics with specific behavior for different field types:
* **Partial updates** - Only fields provided in the request are updated; omitted fields remain unchanged
* **Array fields replace** - `actions`, `inputFields`, `conversationStarters`, and `attachments` completely replace existing values when provided
* **Empty arrays** - Send `[]` to remove all actions/fields/attachments
* **Null handling** - Send `null` for `emoji`, `description`, or `instruction` to clear them
* **Unchanged fields** - Fields not included in the request retain their current values
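For example, the following request body (agent ID taken from the examples below) renames the agent, clears its description, and removes all conversation starters, while leaving every other field untouched:

```javascript theme={null}
// Partial update: omitted fields keep their current values.
const updateBody = {
  agentId: "550e8400-e29b-41d4-a716-446655440000",
  name: "Renamed Agent",     // updated
  description: null,         // null clears the field
  conversationStarters: [],  // empty array removes all starters
  // model, creativity, actions, attachments, ... are omitted
  // and remain unchanged
};
```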
## Request Parameters
All fields are optional except `agentId`:
| Parameter | Type | Required | Description |
| ---------------------- | --------- | -------- | ------------------------------------------------------ |
| `agentId` | string | Yes | UUID of the agent to update |
| `name` | string | No | Updated name (1-255 characters) |
| `description` | string | No | Updated description (max 256 chars, null to clear) |
| `emoji` | string | No | Updated emoji icon (null to clear) |
| `instruction` | string | No | Updated system prompt (max 16384 chars, null to clear) |
| `model` | string | No | Updated model UUID |
| `creativity` | number | No | Updated temperature between 0-1 |
| `conversationStarters` | string\[] | No | Updated array of suggested prompts (replaces existing) |
| `actions` | array | No | Updated array of actions (replaces existing) |
| `inputFields` | array | No | Updated array of form fields (replaces existing) |
| `attachments` | string\[] | No | Updated array of attachment UUIDs (replaces existing) |
| `webSearch` | boolean | No | Updated web search capability setting |
| `imageGeneration` | boolean | No | Updated image generation capability setting |
| `dataAnalyst` | boolean | No | Updated code interpreter capability setting |
| `canvas` | boolean | No | Updated canvas capability setting |
Array fields (`actions`, `inputFields`, `conversationStarters`, `attachments`) are **replaced entirely**, not merged. Always provide the complete desired array, including any existing items you want to keep.
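A common pattern is therefore fetch-then-update. The sketch below (helper names illustrative, error handling omitted) appends one attachment without dropping existing ones, using the Get and Update endpoints documented on these pages:

```javascript theme={null}
// Append one attachment without losing the existing ones. Because
// array fields are replaced entirely, fetch the current list first,
// then send the merged array back.
async function addAttachment(agentId, newAttachmentId, apiKey) {
  const res = await fetch(
    `https://api.langdock.com/agent/v1/get?agentId=${agentId}`,
    { headers: { Authorization: `Bearer ${apiKey}` } }
  );
  const { agent } = await res.json();
  const attachments = mergeIds(agent.attachments, newAttachmentId);
  await fetch("https://api.langdock.com/agent/v1/update", {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ agentId, attachments }),
  });
}

// Pure helper: merge while avoiding duplicate UUIDs.
function mergeIds(existing, extra) {
  return existing.includes(extra) ? existing : [...existing, extra];
}
```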
### Actions Configuration
Each action in the `actions` array should contain:
* `actionId` (required) - UUID of the action from an enabled integration
* `requiresConfirmation` (optional) - Whether to require user confirmation before executing
### Input Fields Configuration
For `inputFields` array structure, see the [Create Agent API](/api-endpoints/agent/agent-create) documentation.
## Examples
### Updating Basic Properties
```javascript theme={null}
const axios = require("axios");
async function updateAgentName() {
const response = await axios.patch(
"https://api.langdock.com/agent/v1/update",
{
agentId: "550e8400-e29b-41d4-a716-446655440000",
name: "Advanced Document Analyzer",
description: "Analyzes documents with enhanced capabilities",
creativity: 0.7
},
{
headers: {
Authorization: "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
}
}
);
console.log("Agent updated:", response.data.message);
}
```
## Validation Rules
The API enforces several validation rules:
* **Agent access** - Your API key must have access to the agent
* **Workspace match** - Agent must belong to the same workspace as your API key
* **Model** - If provided, must be in your workspace's active models list
* **Actions** - If provided, must belong to integrations enabled in your workspace
* **Attachments** - If provided, must exist in your workspace and not be deleted
* **Name** - If provided, must be between 1-255 characters
* **Description** - If provided, maximum 256 characters
* **Instruction** - If provided, maximum 16384 characters
* **Creativity** - If provided, must be between 0 and 1
## Response Format
### Success Response (200 OK)
```typescript theme={null}
{
status: "success";
message: "Agent updated successfully";
agent: {
id: string;
name: string;
description: string;
instruction: string;
emojiIcon: string;
model: string;
temperature: number;
conversationStarters: string[];
inputType: "PROMPT" | "STRUCTURED";
webSearchEnabled: boolean;
imageGenerationEnabled: boolean;
codeInterpreterEnabled: boolean;
canvasEnabled: boolean;
actions: Array<{
actionId: string;
requiresConfirmation: boolean;
}>;
inputFields: Array<{
slug: string;
type: string;
label: string;
description: string;
required: boolean;
order: number;
options: string[];
fileTypes: string[] | null;
}>;
attachments: string[];
createdAt: string;
updatedAt: string;
};
}
```
## Error Handling
```typescript theme={null}
try {
const response = await axios.patch('https://api.langdock.com/agent/v1/update', ...);
} catch (error) {
if (error.response) {
switch (error.response.status) {
case 400:
console.error('Invalid parameters:', error.response.data.message);
break;
case 401:
console.error('Invalid or missing API key');
break;
case 403:
console.error('Insufficient permissions - no access to this agent');
break;
case 404:
console.error('Agent not found or resource not found (model, action, attachment)');
break;
case 500:
console.error('Server error');
break;
}
}
}
```
## Best Practices
**Preserving existing values**: When updating array fields like `actions` or `attachments`, always include existing items you want to keep, as the entire array is replaced.
1. **Fetch before update** - If you need to preserve existing array values, fetch the current agent configuration first
2. **Incremental updates** - Update only the fields that need to change
3. **Validate attachments** - Ensure attachment UUIDs are valid before including them
4. **Test actions** - Verify actions belong to enabled integrations before updating
5. **Handle errors gracefully** - Implement proper error handling for validation failures
# Upload Attachment API
Source: https://docs.langdock.com/api-endpoints/agent/upload-attachments
POST /attachment/v1/upload
Upload files to be used with Agents
This is the new Agents API with native Vercel AI SDK compatibility. The upload attachment endpoint is shared across both APIs. If you're using the legacy Assistants API, see the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration).
Upload files that can be referenced in Agent conversations using their attachment IDs.
To use the API you need an API key. You can create API Keys in your [Workspace
settings](https://app.langdock.com/settings/workspace/products/api).
## Request Format
This endpoint accepts `multipart/form-data` requests with a single file upload.
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | --------------------------- |
| `file` | File | Yes | The file you want to upload |
## Response Format
The API returns the uploaded file information:
```typescript theme={null}
{
attachmentId: string;
file: {
name: string;
mimeType: string;
sizeInBytes: number;
}
}
```
## Example
```javascript theme={null}
const axios = require("axios");
const FormData = require("form-data");
const fs = require("fs");
async function uploadAttachment() {
const form = new FormData();
form.append("file", fs.createReadStream("example.pdf"));
const response = await axios.post(
"https://api.langdock.com/attachment/v1/upload",
form,
{
headers: {
...form.getHeaders(),
Authorization: "Bearer YOUR_API_KEY",
},
}
);
console.log(response.data);
// {
// attachmentId: "550e8400-e29b-41d4-a716-446655440000",
// file: {
// name: "example.pdf",
// mimeType: "application/pdf",
// sizeInBytes: 1234567
// }
// }
}
```
## Error Handling
```javascript theme={null}
try {
const response = await axios.post('https://api.langdock.com/attachment/v1/upload', ...);
} catch (error) {
if (error.response) {
switch (error.response.status) {
case 400:
console.error('No file provided');
break;
case 401:
console.error('Invalid API key');
break;
case 500:
console.error('Server error');
break;
}
}
}
```
The uploaded attachment ID can be used in the Agent API by including it in the `attachmentIds` array either at the agent level or message level.
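As an illustration, here is a request body referencing the uploaded ID at the message level (the agent-level placement works analogously; the agent configuration shape follows the temporary-agent examples elsewhere in these docs):

```javascript theme={null}
// Illustrative request body: the attachmentId returned by the upload
// endpoint is referenced in the message's attachmentIds array.
const attachmentId = "550e8400-e29b-41d4-a716-446655440000";
const requestBody = {
  agent: {
    name: "Document Reviewer",
    instructions: "You review uploaded documents",
    model: "gpt-5",
  },
  messages: [
    {
      role: "user",
      content: "Please summarize the attached file.",
      attachmentIds: [attachmentId],
    },
  ],
};
```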
# API Introduction
Source: https://docs.langdock.com/api-endpoints/api-introduction
Integrate Langdock's powerful AI capabilities into your applications with our comprehensive API suite
## Overview
The Langdock API provides programmatic access to state-of-the-art AI models while maintaining enterprise-grade security and compliance. Whether you're building custom applications, automating workflows, or enhancing existing systems, our API offers the flexibility and power you need.
## Available APIs
### Completion API
Access leading language models from OpenAI, Anthropic, Mistral, and Google for text generation, analysis, and reasoning tasks.
* [OpenAI API](/api-endpoints/completion/openai) - GPT-5 and GPT-4.1 models
* [Anthropic API](/api-endpoints/completion/anthropic) - Claude models
* [Mistral API](/api-endpoints/completion/mistral) - Mistral models
### Embedding API
Generate high-quality embeddings for semantic search, similarity matching, and RAG applications.
* [OpenAI Embeddings](/api-endpoints/embedding/openai-embedding) - Text embeddings for various use cases
### Agent API
Create and manage custom AI agents programmatically with specialized knowledge and capabilities.
* [Agent API Guide](/api-endpoints/agent/agent-api-guide) - Complete guide to using agents
* [Managing Agents](/api-endpoints/agent/agent) - Create and configure agents
* [Agent Models](/api-endpoints/agent/agent-models) - Available models for agents
* [Upload Attachments](/api-endpoints/agent/upload-attachments) - Add documents to agents
### Knowledge Folder API
Manage your organization's knowledge base programmatically for RAG and document processing.
* [Sharing](/api-endpoints/knowledge-folder/sharing) - Share knowledge folders
* [Upload Files](/api-endpoints/knowledge-folder/upload-file) - Add documents
* [Search](/api-endpoints/knowledge-folder/search-knowledge-folder) - Search within folders
## Getting Started
### Authentication
All API requests require authentication using a Bearer token:
```bash theme={null}
curl -H "Authorization: Bearer YOUR_API_KEY" \
https://api.langdock.com/v1/completions
```
**Security Note:**
Langdock intentionally blocks browser-origin requests to protect your API key and ensure your applications remain secure. Calling the API from the browser (e.g., from JavaScript running in a web page) exposes the Langdock API Key publicly and creates a security risk. Therefore, the Langdock API must be accessed from a secure backend environment. For more information, please see our guide on [API Key Best Practices](/administration/api-key-best-practices).
### Rate Limits
API requests are subject to rate limiting to ensure fair usage and system stability. Default limits are:
* API requests: 500 requests per minute
* API tokens: 60,000 tokens per minute
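When a client exceeds these limits, a retry with exponential backoff is a reasonable strategy. The sketch below assumes throttled requests return HTTP 429 (not confirmed by this page); the `doFetch` parameter is injectable so the logic can be tested without a network:

```javascript theme={null}
// Retry a request with exponential backoff when rate limited.
async function requestWithBackoff(
  url,
  options,
  doFetch = fetch,
  retries = 3,
  baseDelayMs = 1000
) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const res = await doFetch(url, options);
    if (res.status !== 429) return res;
    // Wait baseDelayMs * 2^attempt (1s, 2s, 4s, ...) before retrying.
    await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
  }
  throw new Error(`Rate limited after ${retries + 1} attempts`);
}
```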
## Key Features
* All API operations maintain full GDPR compliance with data processing in EU regions
* Access models from OpenAI, Anthropic, Mistral, and more through a unified interface
* SOC 2 certified infrastructure with end-to-end encryption and audit logging
* Bring Your Own Key (BYOK) option for enhanced control over API access
## Use Cases
* **Custom Applications**: Build AI-powered features into your applications
* **Workflow Automation**: Automate document processing and analysis
* **Data Analysis**: Extract insights from large volumes of text data
* **Content Generation**: Create high-quality content at scale
* **Semantic Search**: Implement intelligent search across your knowledge base
## Support
For API support and questions reach out to us at [support@langdock.com](mailto:support@langdock.com).
# Assistants Completions API
Source: https://docs.langdock.com/api-endpoints/assistant/assistant
POST /assistant/v1/chat/completions
Creates a model response for a given Assistant.
**The Assistants API will be deprecated in a future release.**
For new projects, we recommend using the [Agents API](/api-endpoints/agent/agent). The Agents API provides native Vercel AI SDK compatibility and removes custom transformations.
See the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration) to learn about the differences.
Creates a model response for a given assistant ID, or for an assistant configuration passed directly in your request.
To share an assistant with an API key, follow [this guide](/api-endpoints/assistant/assistant-api-guide)
## Base URL
```
https://api.langdock.com/assistant/v1/chat/completions
```
For dedicated deployments, use `https:///api/public/assistant/v1/chat/completions` instead.
## Request Parameters
| Parameter | Type | Required | Description |
| ------------- | ------- | ------------------------------------- | ---------------------------------------------- |
| `assistantId` | string | One of assistantId/assistant required | ID of an existing assistant to use |
| `assistant` | object | One of assistantId/assistant required | Configuration for a new assistant |
| `messages` | array | Yes | Array of message objects with role and content |
| `stream` | boolean | No | Enable streaming responses (default: false) |
| `output` | object | No | Structured output format specification |
### Message Format
Each message in the `messages` array should contain:
* `role` (required) - One of: "user", "assistant", or "tool"
* `content` (required) - The message content as a string
* `attachmentIds` (optional) - Array of UUID strings identifying attachments for this message
### Assistant Configuration
When creating a temporary assistant, you can specify:
* `name` (required) - Name of the assistant (max 64 chars)
* `instructions` (required) - System instructions (max 16384 chars)
* `description` - Optional description (max 256 chars)
* `temperature` - Temperature between 0-1
* `model` - Model ID to use (see [Available Models](/api-endpoints/assistant/assistant-models) for options)
* `capabilities` - Enable features like web search, data analysis, image generation
* `actions` - Custom API integrations
* `vectorDb` - Vector database connections
* `knowledgeFolderIds` - IDs of knowledge folders to use
* `attachmentIds` - Array of UUID strings identifying attachments to use
You can retrieve a list of available models using the [Models
API](/api-endpoints/assistant/assistant-models). This is useful when you want to see which models you can use in your assistant configuration.
## Using Tools via API
When an assistant has tools configured (called "Actions" in the Langdock UI), it will automatically use them to respond to API requests when appropriate.
The connection must be set to "preselected connection" (shared with other users) for tool authentication to work.
Tools with **"Require human confirmation"** enabled do not work via the API; they require manual approval in the Langdock UI. To use a tool via the API, disable this setting in the assistant configuration.
## Structured Output
You can specify a structured output format using the optional `output` parameter:
| Field | Type | Description |
| -------- | ----------------------------- | -------------------------------------------------------------- |
| `type` | "object" \| "array" \| "enum" | The type of structured output |
| `schema` | object | JSON Schema definition for the output (for object/array types) |
| `enum` | string\[] | Array of allowed values (for enum type) |
The `output` parameter behavior depends on the specified type:
* `type: "object"` with no schema: Forces the response to be a single JSON object (no specific structure)
* `type: "object"` with schema: Forces the response to match the provided JSON Schema
* `type: "array"` with schema: Forces the response to be an array of objects matching the provided schema
* `type: "enum"`: Forces the response to be one of the values specified in the `enum` array
You can use tools like [easy-json-schema](https://easy-json-schema.github.io/) to generate JSON Schemas from example JSON objects.
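For example, the following request body (assistant ID and schema are illustrative) forces the response to be an array of objects, each with a `city` and `country` string:

```javascript theme={null}
// Structured output: type "array" with a JSON Schema describing
// each element of the returned array.
const requestBody = {
  assistantId: "asst_123",
  messages: [
    { role: "user", content: "List three EU capitals with their countries" },
  ],
  output: {
    type: "array",
    schema: {
      type: "object",
      properties: {
        city: { type: "string" },
        country: { type: "string" },
      },
      required: ["city", "country"],
    },
  },
};
```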
## Streaming Responses
When `stream` is set to `true`, the API will return a stream of server-sent events (SSE) instead of waiting for the complete response. This allows you to display responses to users progressively as they are generated.
### Stream Format
Each event in the stream follows the SSE format with JSON data:
```
data: {"type":"message","content":"Hello"}
data: {"type":"message","content":" world"}
data: {"type":"done"}
```
### Handling Streams in JavaScript
```javascript theme={null}
const response = await fetch('https://api.langdock.com/assistant/v1/chat/completions', {
method: 'POST',
headers: {
'Authorization': 'Bearer YOUR_API_KEY',
'Content-Type': 'application/json',
},
body: JSON.stringify({
assistantId: 'asst_123',
messages: [{ role: 'user', content: 'Hello' }],
stream: true
}),
});
const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';
while (true) {
const { done, value } = await reader.read();
if (done) break;
buffer += decoder.decode(value, { stream: true });
const lines = buffer.split('\n');
buffer = lines.pop(); // keep any partial line until the next chunk arrives
for (const line of lines) {
if (line.startsWith('data: ')) {
const data = JSON.parse(line.slice(6));
if (data.type === 'message') {
process.stdout.write(data.content);
}
}
}
}
```
## Obtaining Attachment IDs
To use attachments in your assistant conversations, you first need to upload the files using the [Upload Attachment API](/api-endpoints/assistant/upload-attachments). This will return an `attachmentId` for each file, which you can then include in the `attachmentIds` array in your assistant or message configuration.
## Examples
### Using an Existing Agent
```javascript theme={null}
const axios = require("axios");
async function chatWithAssistant() {
const response = await axios.post(
"https://api.langdock.com/assistant/v1/chat/completions",
{
assistantId: "asst_123",
messages: [
{
role: "user",
content: "Can you analyze this document for me?",
attachmentIds: ["550e8400-e29b-41d4-a716-446655440000"], // Obtain attachmentIds from upload attachment endpoint
},
],
stream: false, // set to true for SSE streaming, which requires stream handling (see "Streaming Responses")
},
{
headers: {
Authorization: "Bearer YOUR_API_KEY",
},
}
);
console.log(response.data.result);
}
```
### Using a Temporary Agent Configuration
```javascript theme={null}
const axios = require("axios");
async function chatWithNewAssistant() {
const response = await axios.post(
"https://api.langdock.com/assistant/v1/chat/completions",
{
assistant: {
name: "Document Analyzer",
instructions:
"You are a helpful assistant who analyzes documents and answers questions about them",
temperature: 0.7,
model: "gpt-4",
capabilities: {
webSearch: true,
dataAnalyst: true,
},
attachmentIds: ["550e8400-e29b-41d4-a716-446655440000"], // Obtain attachmentIds from upload attachment endpoint
},
messages: [
{
role: "user",
content: "What are the key points in the document?",
},
],
},
{
headers: {
Authorization: "Bearer YOUR_API_KEY",
},
}
);
console.log(response.data.result);
}
```
### Using Structured Output with Schema
```javascript theme={null}
const axios = require("axios");
async function getStructuredWeather() {
const response = await axios.post(
"https://api.langdock.com/assistant/v1/chat/completions",
{
assistant: {
name: "Weather Agent",
instructions: "You are a helpful weather assistant",
model: "gpt-5.1",
capabilities: {
webSearch: true,
},
},
messages: [
{
role: "user",
content: "What's the weather in paris, berlin and london today?",
},
],
output: {
type: "array",
schema: {
type: "object",
properties: {
weather: {
type: "object",
properties: {
city: {
type: "string",
},
tempInCelsius: {
type: "number",
},
tempInFahrenheit: {
type: "number",
},
},
required: ["city", "tempInCelsius", "tempInFahrenheit"],
},
},
},
},
},
{
headers: {
Authorization: "Bearer YOUR_API_KEY",
},
}
);
// Access the structured data directly from output
console.log(response.data.output);
// Output:
// [
// { "weather": { "city": "Paris", "tempInCelsius": 1, "tempInFahrenheit": 33 } },
// { "weather": { "city": "Berlin", "tempInCelsius": 1, "tempInFahrenheit": 35 } },
// { "weather": { "city": "London", "tempInCelsius": 7, "tempInFahrenheit": 45 } }
// ]
}
```
### Using Structured Output with Object
```javascript theme={null}
const axios = require("axios");
async function extractContactInfo() {
const response = await axios.post(
"https://api.langdock.com/assistant/v1/chat/completions",
{
assistant: {
name: "Contact Extractor",
instructions: "You extract contact information from text",
},
messages: [
{
role: "user",
content:
"Extract the contact info: John Smith is our new sales lead. You can reach him at john.smith@example.com or call +1-555-123-4567.",
},
],
output: {
type: "object",
schema: {
type: "object",
properties: {
name: {
type: "string",
},
email: {
type: "string",
},
phone: {
type: "string",
},
role: {
type: "string",
},
},
required: ["name", "email"],
},
},
},
{
headers: {
Authorization: "Bearer YOUR_API_KEY",
},
}
);
// Access the structured data directly from output
console.log(response.data.output);
// Output:
// {
// "name": "John Smith",
// "email": "john.smith@example.com",
// "phone": "+1-555-123-4567",
// "role": "sales lead"
// }
}
```
### Using Structured Output with Enum
```javascript theme={null}
const axios = require("axios");
async function getSentimentAnalysis() {
const response = await axios.post(
"https://api.langdock.com/assistant/v1/chat/completions",
{
assistant: {
name: "Sentiment Analyzer",
instructions: "You analyze the sentiment of text",
},
messages: [
{
role: "user",
content:
"How would you rate this review: 'This product exceeded my expectations!'",
},
],
output: {
type: "enum",
enum: ["positive", "neutral", "negative"],
},
},
{
headers: {
Authorization: "Bearer YOUR_API_KEY",
},
}
);
// Access the enum result directly from output
console.log(response.data.output);
// Output: "positive"
}
```
## Rate limits
The rate limit for the Agent Completion endpoint is **500 RPM (requests per minute)** and **60,000 TPM (tokens per minute)**. Rate limits are defined at the workspace level, not per API key, and each model has its own rate limit. If you exceed your rate limit, you will receive a `429 Too Many Requests` response.
Rate limits are subject to change; refer to this documentation for the most up-to-date information. If you need a higher rate limit, please contact us at [support@langdock.com](mailto:support@langdock.com).
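When you do hit a `429`, back off and retry rather than failing outright. A minimal retry helper, assuming you wrap your axios call in a function; the backoff delays are illustrative:

```javascript
// Retry a request function on 429 responses with exponential backoff.
// `requestFn` is any async function that throws an axios-style error
// (with `error.response.status`) on failure.
async function withRetry(requestFn, maxRetries = 3, baseDelayMs = 1000) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await requestFn();
    } catch (error) {
      const status = error.response && error.response.status;
      if (status !== 429 || attempt === maxRetries) throw error;
      // Wait 1s, 2s, 4s, ... before the next attempt
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Usage: `withRetry(() => axios.post(url, body, config))`. Errors other than `429` are rethrown immediately so genuine failures surface right away.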
## Response Format
The API returns an object containing:
```typescript theme={null}
{
// Standard message results - always present
result: Array<{
id: string;
role: "tool" | "assistant";
content: Array<{
type: string;
toolCallId?: string;
toolName?: string;
result?: object;
args?: object;
text?: string;
}>;
}>;
// Structured output - included by default
  output?: object | object[] | string;
}
```
### Standard Result
The `result` array contains the message exchange between user and assistant, including any tool calls that were made. This is always present in the response.
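If you only need the assistant's final text reply, you can filter the `result` array. A small helper, assuming the message shape shown in the Response Format above:

```javascript
// Extract the text of the last assistant message from a `result` array
function extractAssistantText(result) {
  const assistantMessages = result.filter((message) => message.role === "assistant");
  const last = assistantMessages[assistantMessages.length - 1];
  if (!last || !Array.isArray(last.content)) return "";
  // Keep only text parts; tool calls and tool results have no `text` field
  return last.content
    .filter((part) => typeof part.text === "string")
    .map((part) => part.text)
    .join("");
}
```

This skips intermediate tool-call messages and concatenates any text parts in the final assistant message.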
### Structured Output
When the request includes an `output` parameter, the response will automatically include an `output` field containing the formatted structured data. The type of this field depends on the requested output format:
* If `output.type` was "object": Returns a JSON object (with schema validation if schema was provided)
* If `output.type` was "array": Returns an array of objects matching the provided schema
* If `output.type` was "enum": Returns a string matching one of the provided enum values
For example, when requesting weather data with structured output:
```javascript theme={null}
// Request
{
"output": {
"type": "array",
"schema": {
"type": "object",
"properties": {
"weather": {
"type": "object",
"properties": {
"city": { "type": "string" },
"tempInCelsius": { "type": "number" },
"tempInFahrenheit": { "type": "number" }
},
"required": ["city", "tempInCelsius", "tempInFahrenheit"]
}
}
}
}
}
// Response
{
"result": [
// Full conversation including tool calls (e.g., web searches)
{ "role": "assistant", "content": [...], "id": "..." },
{ "role": "tool", "content": [...], "id": "..." },
{ "role": "assistant", "content": "...", "id": "..." }
],
"output": [
{ "weather": { "city": "Paris", "tempInCelsius": 1, "tempInFahrenheit": 33 } },
{ "weather": { "city": "Berlin", "tempInCelsius": 1, "tempInFahrenheit": 35 } },
{ "weather": { "city": "London", "tempInCelsius": 7, "tempInFahrenheit": 45 } }
]
}
```
The `output` field is automatically populated with the formatted results based on the assistant's response and your schema definition. You can use this directly in your application without parsing the full conversation in `result`.
## Error Handling
```javascript theme={null}
try {
const response = await axios.post('https://api.langdock.com/assistant/v1/chat/completions', ...);
} catch (error) {
if (error.response) {
switch (error.response.status) {
case 400:
console.error('Invalid parameters:', error.response.data.message);
break;
case 429:
console.error('Rate limit exceeded');
break;
case 500:
console.error('Server error');
break;
}
}
}
```
## Migrating to Agents API
The new Agents API offers improved compatibility with modern AI SDKs, including native support for the Vercel AI SDK. The main difference is in the chat completions endpoint format.
See the equivalent endpoint in the Agents API:
* [Agents Completions API](/api-endpoints/agent/agent) - Uses Vercel AI SDK message format
Langdock intentionally blocks browser-origin requests to protect your API key and ensure your applications remain secure. For more information, please see our guide on [API Key Best Practices](/administration/api-key-best-practices).
# Sharing Assistants with API Keys
Source: https://docs.langdock.com/api-endpoints/assistant/assistant-api-guide
Learn how to create an API key in Langdock and share an assistant with it for programmatic access.
**The Assistants API will be deprecated in a future release.**
For new projects, we recommend using the [Agents API](/api-endpoints/agent/agent). The Agents API provides native Vercel AI SDK compatibility and removes custom transformations.
See the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration) to learn about the differences.
An admin needs to create the API key and share the assistant with it. If you're not an admin, invite an admin as an editor to your assistant using the "Share" button.
## How to create an API key
1. Navigate to [Langdock](https://app.langdock.com) and open the workspace settings from the dropdown menu.
2. Click on **API** under Products in the sidebar.
3. Click **Create API key**, enter a name, select the required scopes (at minimum "Agent API"), and confirm.
4. Copy your API key and store it securely. You won't be able to view it again.
## How to share an assistant with the API key
1. Navigate to **Agents** in the sidebar.
2. Create a new assistant or select an existing one. Enter at least a name to save it.
3. In the assistant editor, click the **Share** button in the top right corner.
4. The share dialog opens showing current access settings.
5. Search for your API key by name and add it to share the assistant with the API.
Only admins can connect an assistant with an API key. If you don't see API keys in the share menu, ask an admin to perform this step.
## Testing the API connection
Once shared, you can test your assistant via the [Assistant API documentation](/api-endpoints/assistant/assistant). Use your API key and the assistant ID from the URL (`https://app.langdock.com/assistants/ASSISTANT_ID/edit`).
## Migrating to Agents API
For new projects, we recommend using the Agents API instead:
* [Agent API Guide](/api-endpoints/agent/agent-api-guide) - Setup guide for the Agents API
* [Full migration guide](/api-endpoints/assistant/assistant-to-agent-migration) - Learn about the differences
Langdock blocks browser-origin requests to protect your API key. For more information, see our guide on [API Key Best Practices](/administration/api-key-best-practices).
# Assistant Create API
Source: https://docs.langdock.com/api-endpoints/assistant/assistant-create
POST /assistant/v1/create
Create a new Assistant programmatically
**The Assistants API will be deprecated in a future release.**
For new projects, we recommend using the [Agents API](/api-endpoints/agent/agent-create). The Agents API provides native Vercel AI SDK compatibility and removes custom transformations.
See the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration) to learn about the differences.
Creates a new Assistant in your workspace programmatically. The created assistant can be used via the chat completions endpoint or accessed through the Langdock UI.
Requires an API key with the `AGENT_API` scope. Created Assistants are automatically shared with the API key for use in chat completions.
## Request Parameters
| Parameter | Type | Required | Description |
| ---------------------- | --------- | -------- | -------------------------------------------------------------- |
| `name` | string | Yes | Name of the Assistant (1-255 characters) |
| `description` | string | No | Description of what the Assistant does (max 256 chars) |
| `emoji` | string | No | Emoji icon for the Assistant (e.g., "🤖") |
| `instruction` | string | No | System prompt/instructions for the Assistant (max 16384 chars) |
| `inputType` | string | No | Input type: "PROMPT" or "STRUCTURED" (default: "PROMPT") |
| `model` | string | No | Model UUID to use (uses workspace default if not provided) |
| `creativity` | number | No | Temperature between 0-1 (default: 0.3) |
| `conversationStarters` | string\[] | No | Array of suggested prompts to help users get started |
| `actions` | array | No | Array of action objects for custom integrations |
| `inputFields` | array | No | Array of form field definitions (for STRUCTURED input type) |
| `attachments` | string\[] | No | Array of attachment UUIDs to include with the Assistant |
| `webSearch` | boolean | No | Enable web search capability (default: false) |
| `imageGeneration` | boolean | No | Enable image generation capability (default: false) |
| `dataAnalyst` | boolean | No | Enable code interpreter capability (default: false) |
| `canvas` | boolean | No | Enable canvas capability (default: false) |
### Actions Configuration
Each action in the `actions` array should contain:
* `actionId` (required) - UUID of the action from an enabled integration
* `requiresConfirmation` (optional) - Whether to require user confirmation before executing (default: false)
Only actions from integrations enabled in your workspace can be used.
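An `actions` array in the Create API request body might look like this. The UUID is a placeholder, not a real action ID:

```javascript
// Placeholder action configuration for the `actions` parameter
const actions = [
  {
    actionId: "550e8400-e29b-41d4-a716-446655440000", // UUID of an action from an enabled integration
    requiresConfirmation: true, // ask the user before this action runs
  },
];
```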
### Input Fields Configuration
When using `inputType: "STRUCTURED"`, you can define form fields in the `inputFields` array:
| Field | Type | Required | Description |
| ------------- | --------- | -------- | ---------------------------------------------- |
| `slug` | string | Yes | Unique identifier for the field |
| `type` | string | Yes | Field type (see supported types below) |
| `label` | string | Yes | Display label for the field |
| `description` | string | No | Help text for the field |
| `required` | boolean | No | Whether the field is required (default: false) |
| `order` | number | Yes | Display order (0-indexed) |
| `options` | string\[] | No | Options for SELECT type fields |
| `fileTypes` | string\[] | No | Allowed file types for FILE type fields |
**Supported Field Types:**
* `TEXT` - Single line text input
* `MULTI_LINE_TEXT` - Multi-line text area
* `NUMBER` - Numeric input
* `CHECKBOX` - Boolean checkbox
* `FILE` - File upload
* `SELECT` - Dropdown selection
* `DATE` - Date picker
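Putting the table and field types together, an `inputFields` array for a STRUCTURED Assistant might look like this. The slugs, labels, and options are illustrative:

```javascript
// Illustrative `inputFields` array for inputType: "STRUCTURED"
const inputFields = [
  {
    slug: "topic",
    type: "TEXT",
    label: "Topic",
    description: "What should the Assistant write about?",
    required: true,
    order: 0,
  },
  {
    slug: "details",
    type: "MULTI_LINE_TEXT",
    label: "Details",
    order: 1,
  },
  {
    slug: "tone",
    type: "SELECT",
    label: "Tone",
    order: 2,
    options: ["Formal", "Casual"],
  },
];
```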
## Obtaining Attachment IDs
To include attachments with your Assistant, first upload files using the [Upload Attachment API](/api-endpoints/assistant/upload-attachments). This will return attachment UUIDs that you can include in the `attachments` array.
## Examples
### Creating a Basic Assistant
```javascript theme={null}
const axios = require("axios");
async function createBasicAssistant() {
const response = await axios.post(
"https://api.langdock.com/assistant/v1/create",
{
name: "Document Analyzer",
description: "Analyzes and summarizes documents",
emoji: "📄",
instruction: "You are a helpful Assistant that analyzes documents and provides clear summaries of key information.",
creativity: 0.5,
conversationStarters: [
"Summarize this document",
"What are the key points?",
"Extract action items"
],
dataAnalyst: true,
webSearch: false
},
{
headers: {
Authorization: "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
}
}
);
console.log("Assistant created:", response.data.assistant.id);
}
```
## Validation Rules
The API enforces several validation rules:
* **Model** - Must be in your workspace's active models list
* **Actions** - Must belong to integrations enabled in your workspace
* **Attachments** - Must exist in your workspace and not be deleted
* **Permissions** - Your API key must have the `createAssistants` permission
* **Name** - Must be between 1-255 characters
* **Description** - Maximum 256 characters
* **Instruction** - Maximum 16384 characters
* **Creativity** - Must be between 0 and 1
## Important Notes
Pre-selected OAuth connections are not supported via the API. Users must configure OAuth connections through the Langdock UI.
* Created Assistants are automatically shared with your API key for use in chat completions
* The API key creator becomes the owner and can manage the Assistant in the UI
* Attachments are bidirectionally linked to the Assistant
* The Assistant type is set to `AGENT` (not `WORKFLOW` or `PROJECT`)
* `createdBy` and `workspaceId` are automatically set from your API key
## Response Format
### Success Response (201 Created)
```typescript theme={null}
{
status: "success";
message: "Assistant created successfully";
assistant: {
id: string;
name: string;
description: string;
instruction: string;
emojiIcon: string;
model: string;
temperature: number;
conversationStarters: string[];
inputType: "PROMPT" | "STRUCTURED";
webSearchEnabled: boolean;
imageGenerationEnabled: boolean;
codeInterpreterEnabled: boolean;
canvasEnabled: boolean;
actions: Array<{
actionId: string;
requiresConfirmation: boolean;
}>;
inputFields: Array<{
slug: string;
type: string;
label: string;
description: string;
required: boolean;
order: number;
options: string[];
fileTypes: string[] | null;
}>;
attachments: string[];
createdAt: string;
updatedAt: string;
};
}
```
## Error Handling
```typescript theme={null}
try {
const response = await axios.post('https://api.langdock.com/assistant/v1/create', ...);
} catch (error) {
if (error.response) {
switch (error.response.status) {
case 400:
console.error('Invalid parameters:', error.response.data.message);
break;
case 401:
console.error('Invalid or missing API key');
break;
case 403:
console.error('Insufficient permissions - requires AGENT_API scope');
break;
case 404:
console.error('Resource not found (model, action, or attachment)');
break;
case 500:
console.error('Server error');
break;
}
}
}
```
## Migrating to Agents API
The new Agents API offers improved compatibility with modern AI SDKs. The create endpoint has similar functionality with updated parameter names.
See the equivalent endpoint in the Agents API:
* [Agent Create API](/api-endpoints/agent/agent-create) - Uses `agentId` instead of `assistantId`
Langdock intentionally blocks browser-origin requests to protect your API key and ensure your applications remain secure. For more information, please see our guide on [API Key Best Practices](/administration/api-key-best-practices).
# Assistant Get API
Source: https://docs.langdock.com/api-endpoints/assistant/assistant-get
GET /assistant/v1/get
Retrieve details of an existing Assistant
**The Assistants API will be deprecated in a future release.**
For new projects, we recommend using the [Agents API](/api-endpoints/agent/agent-get). The Agents API provides native Vercel AI SDK compatibility and removes custom transformations.
See the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration) to learn about the differences.
Retrieves the complete configuration and details of an existing Assistant in your workspace.
Requires an API key with the `AGENT_API` scope and access to the Assistant you want to retrieve.
## Query Parameters
| Parameter | Type | Required | Description |
| ------------- | ------ | -------- | --------------------------------- |
| `assistantId` | string | Yes | UUID of the assistant to retrieve |
## Examples
### Basic Retrieval
```javascript theme={null}
const axios = require("axios");
async function getAssistant() {
const response = await axios.get(
"https://api.langdock.com/assistant/v1/get",
{
params: {
assistantId: "550e8400-e29b-41d4-a716-446655440000"
},
headers: {
Authorization: "Bearer YOUR_API_KEY"
}
}
);
console.log("Assistant details:", response.data.assistant);
}
```
## Validation Rules
The API enforces the following validation rules:
* **Assistant access** - Your API key must have access to the assistant
* **Workspace match** - Assistant must belong to the same workspace as your API key
## Response Format
### Success Response (200 OK)
```typescript theme={null}
{
status: "success";
assistant: {
id: string;
name: string;
description: string;
instruction: string;
emojiIcon: string;
model: string;
temperature: number;
conversationStarters: string[];
inputType: "PROMPT" | "STRUCTURED";
webSearchEnabled: boolean;
imageGenerationEnabled: boolean;
codeInterpreterEnabled: boolean;
canvasEnabled: boolean;
actions: Array<{
actionId: string;
requiresConfirmation: boolean;
}>;
inputFields: Array<{
slug: string;
type: string;
label: string;
description: string;
required: boolean;
order: number;
options: string[];
fileTypes: string[] | null;
}>;
attachments: string[];
createdAt: string;
updatedAt: string;
};
}
```
## Error Handling
```typescript theme={null}
try {
const response = await axios.get('https://api.langdock.com/assistant/v1/get', ...);
} catch (error) {
if (error.response) {
switch (error.response.status) {
case 400:
console.error('Invalid Assistant ID format');
break;
case 401:
console.error('Invalid or missing API key');
break;
case 403:
console.error('Insufficient permissions - no access to this Assistant');
break;
case 404:
console.error('Assistant not found');
break;
case 500:
console.error('Server error');
break;
}
}
}
```
## Migrating to Agents API
The new Agents API offers improved compatibility with modern AI SDKs. The get endpoint has similar functionality with updated parameter names.
See the equivalent endpoint in the Agents API:
* [Agent Get API](/api-endpoints/agent/agent-get) - Uses `agentId` instead of `assistantId`
Langdock intentionally blocks browser-origin requests to protect your API key and ensure your applications remain secure. For more information, please see our guide on [API Key Best Practices](/administration/api-key-best-practices).
# Assistant Migration Guide
Source: https://docs.langdock.com/api-endpoints/assistant/assistant-migration
How to migrate Assistants between Langdock workspaces using the Assistant API
**The Assistants API will be deprecated in a future release.**
For new projects, we recommend using the [Agents API](/api-endpoints/agent/agent-migration). The Agents API provides native Vercel AI SDK compatibility and removes custom transformations.
See the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration) to learn about the differences.
This guide explains how to migrate Assistants from one Langdock workspace to another using the Assistant API. This is useful when you want to replicate assistant configurations across different workspaces, move assistants during organizational restructuring, or create backup copies of your assistants.
## Overview
The migration process involves two main steps:
1. **Export**: Retrieve the Assistant configuration from the source workspace using the [assistant Get API](/api-endpoints/assistant/assistant-get)
2. **Import**: Create a new Assistant in the target workspace using the [assistant Create API](/api-endpoints/assistant/assistant-create)
## Prerequisites
Before you begin, ensure you have:
1. **Two API keys with `AGENT_API` scope**:
* One API key for the **source workspace** (where the Assistant currently exists)
* One API key for the **target workspace** (where you want to migrate the Assistant)
2. **Access to the Assistant**: Your source workspace API key must have access to the assistant you want to migrate
3. **Matching resources in target workspace** (if applicable):
* The model will need to be manually adjusted in the Langdock UI after migration
* If the Assistant uses custom actions, those integrations must be enabled in the target workspace
* Attachments need to be re-uploaded separately (they are not transferred automatically)
## Step 1: Export the Assistant from Source Workspace
Use the **Assistant Get API** to retrieve the complete configuration of your assistant:
```javascript theme={null}
const axios = require("axios");
async function getAssistantFromSource(assistantId, sourceApiKey) {
const response = await axios.get(
"https://api.langdock.com/assistant/v1/get",
{
params: {
assistantId: assistantId
},
headers: {
Authorization: `Bearer ${sourceApiKey}`
}
}
);
return response.data.assistant;
}
```
### Understanding the Response
The Get API returns the complete Assistant configuration including:
* `name`, `description`, `instruction` - The Assistant's identity and system prompt
* `emojiIcon` - The emoji icon displayed for the Assistant
* `model` - UUID of the model being used
* `temperature` - Creativity setting (0-1)
* `conversationStarters` - Suggested prompts for users
* `inputType` - Either "PROMPT" or "STRUCTURED"
* `inputFields` - Form field definitions (for STRUCTURED input type)
* `webSearchEnabled`, `imageGenerationEnabled`, `codeInterpreterEnabled`, `canvasEnabled` - Capability flags
* `actions` - Custom integration actions
* `attachments` - UUIDs of attached files
## Step 2: Transform the Configuration
The Get API response uses slightly different field names than the Create API expects. You need to map the fields:
```javascript theme={null}
function transformForCreate(sourceAssistant) {
return {
// Basic information
name: sourceAssistant.name,
description: sourceAssistant.description || undefined,
emoji: sourceAssistant.emojiIcon || undefined,
instruction: sourceAssistant.instruction || undefined,
// Settings
creativity: sourceAssistant.temperature,
inputType: sourceAssistant.inputType,
conversationStarters: sourceAssistant.conversationStarters || [],
// Capabilities
webSearch: sourceAssistant.webSearchEnabled,
imageGeneration: sourceAssistant.imageGenerationEnabled,
dataAnalyst: sourceAssistant.codeInterpreterEnabled,
canvas: sourceAssistant.canvasEnabled,
// Input fields (for STRUCTURED input type)
inputFields: sourceAssistant.inputFields || [],
// Note: actions and attachments require special handling (see below)
};
}
```
### Field Mapping Reference
| Get API Response Field | Create API Request Field |
| ------------------------ | ------------------------ |
| `name` | `name` |
| `description` | `description` |
| `emojiIcon` | `emoji` |
| `instruction` | `instruction` |
| `temperature` | `creativity` |
| `inputType` | `inputType` |
| `conversationStarters` | `conversationStarters` |
| `webSearchEnabled` | `webSearch` |
| `imageGenerationEnabled` | `imageGeneration` |
| `codeInterpreterEnabled` | `dataAnalyst` |
| `canvasEnabled` | `canvas` |
| `inputFields` | `inputFields` |
| `actions` | `actions` |
| `attachments` | `attachments` |
## Step 3: Create the Assistant in Target Workspace
Use the **Assistant Create API** to create the assistant in the target workspace:
```javascript theme={null}
async function createAssistantInTarget(assistantConfig, targetApiKey) {
const response = await axios.post(
"https://api.langdock.com/assistant/v1/create",
assistantConfig,
{
headers: {
Authorization: `Bearer ${targetApiKey}`,
"Content-Type": "application/json"
}
}
);
return response.data.assistant;
}
```
## Complete Migration Script
Here's a complete script that combines all steps:
```javascript theme={null}
const axios = require("axios");
// Configuration
const SOURCE_API_KEY = "your-source-workspace-api-key";
const TARGET_API_KEY = "your-target-workspace-api-key";
const AGENT_ID_TO_MIGRATE = "550e8400-e29b-41d4-a716-446655440000";
async function migrateAssistant() {
try {
// Step 1: Get Assistant from source workspace
console.log("Fetching Assistant from source workspace...");
const getResponse = await axios.get(
"https://api.langdock.com/assistant/v1/get",
{
params: { assistantId: AGENT_ID_TO_MIGRATE },
headers: { Authorization: `Bearer ${SOURCE_API_KEY}` }
}
);
const sourceAssistant = getResponse.data.assistant;
console.log(`Found Assistant: "${sourceAssistant.name}"`);
// Step 2: Transform configuration for Create API
const createConfig = {
name: sourceAssistant.name,
description: sourceAssistant.description || undefined,
emoji: sourceAssistant.emojiIcon || undefined,
instruction: sourceAssistant.instruction || undefined,
creativity: sourceAssistant.temperature,
inputType: sourceAssistant.inputType,
conversationStarters: sourceAssistant.conversationStarters || [],
webSearch: sourceAssistant.webSearchEnabled,
imageGeneration: sourceAssistant.imageGenerationEnabled,
dataAnalyst: sourceAssistant.codeInterpreterEnabled,
canvas: sourceAssistant.canvasEnabled,
inputFields: sourceAssistant.inputFields || [],
// Note: actions and attachments excluded - see "Handling Special Cases"
};
// Remove undefined values
Object.keys(createConfig).forEach(key => {
if (createConfig[key] === undefined) {
delete createConfig[key];
}
});
// Step 3: Create Assistant in target workspace
console.log("Creating Assistant in target workspace...");
const createResponse = await axios.post(
"https://api.langdock.com/assistant/v1/create",
createConfig,
{
headers: {
Authorization: `Bearer ${TARGET_API_KEY}`,
"Content-Type": "application/json"
}
}
);
const newAssistant = createResponse.data.assistant;
console.log(`Migration successful!`);
console.log(`New Assistant ID: ${newAssistant.id}`);
console.log(`Assistant name: ${newAssistant.name}`);
return newAssistant;
} catch (error) {
if (error.response) {
console.error(`Error ${error.response.status}: ${JSON.stringify(error.response.data)}`);
} else {
console.error("Error:", error.message);
}
throw error;
}
}
migrateAssistant();
```
## Handling Special Cases
### Actions (Custom Integrations)
Actions reference integrations that must be enabled in the target workspace. Action UUIDs are specific to each workspace's integration setup.
Exclude actions from the initial migration and manually configure them in the target workspace after the Assistant is created.
### Attachments
Attachment UUIDs reference files stored in the source workspace. These files are not automatically transferred.
**To migrate attachments**:
1. Download the files from the source workspace
2. Re-upload them to the target workspace using the [Upload Attachment API](/api-endpoints/assistant/upload-attachments)
3. Update the Assistant with the new attachment UUIDs
### OAuth Connections
Pre-selected OAuth connections are **not supported via the API**. Users must configure OAuth connections through the Langdock UI after migration.
## Migrating Multiple Assistants
To migrate multiple Assistants, loop through a list of assistant IDs (this assumes the `migrateAssistant` function above has been adapted to accept an assistant ID):
```javascript theme={null}
const AGENT_IDS = [
"Assistant-uuid-1",
"Assistant-uuid-2",
"Assistant-uuid-3"
];
async function migrateMultipleAssistants() {
const results = [];
for (const assistantId of AGENT_IDS) {
try {
console.log(`\nMigrating Assistant: ${assistantId}`);
const newAssistant = await migrateAssistant(assistantId);
results.push({
sourceId: assistantId,
targetId: newAssistant.id,
status: "success"
});
} catch (error) {
results.push({
sourceId: assistantId,
status: "failed",
error: error.message
});
}
}
console.log("\n=== Migration Summary ===");
console.table(results);
}
```
## Post-Migration Checklist
After migrating an Assistant, verify the following in the target workspace:
* Assistant appears in the assistants list with correct name and emoji
* Description and instructions are correctly transferred
* Conversation starters are present
* Capabilities (web search, image generation, etc.) are correctly enabled
* Input fields are properly configured (for STRUCTURED input type)
* Manually configure any OAuth connections through the UI
* Re-upload and attach any necessary files
* Configure custom actions/integrations if needed
* Test the Assistant by sending a message
## Limitations
Keep these limitations in mind when planning your migration:
1. **Attachments are not transferred** - Files must be re-uploaded to the target workspace
2. **Actions may need reconfiguration** - Integration action UUIDs are workspace-specific
3. **OAuth connections require manual setup** - Cannot be configured via API
4. **Models require manual adjustment** - The Assistant will use the workspace default model; adjust manually in the UI after migration
5. **Conversation history is not migrated** - Only the Assistant configuration is transferred
## Migrating to Agents API
The new Agents API offers improved compatibility with modern AI SDKs. The migration process is similar with updated parameter names.
See the equivalent guide in the Agents API:
* [Agent Migration Guide](/api-endpoints/agent/agent-migration) - Uses `agentId` instead of `assistantId`
Langdock intentionally blocks browser-origin requests to protect your API key and ensure your applications remain secure. For more information, please see our guide on [API Key Best Practices](/administration/api-key-best-practices).
# Models for Assistant API
Source: https://docs.langdock.com/api-endpoints/assistant/assistant-models
GET /assistant/v1/models
Retrieve all available models for use with the Assistant API.
**The Assistants API will be deprecated in a future release.**
For new projects, we recommend using the [Agents API](/api-endpoints/agent/agent-models). The Agents API provides native Vercel AI SDK compatibility and removes custom transformations.
See the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration) to learn about the differences.
Retrieve the list of available models and their IDs for use with the Assistant API. This endpoint is useful when you want to see which models you can use when creating a temporary assistant.
## Example Request
```javascript theme={null}
const axios = require("axios");
async function getAvailableModels() {
try {
const response = await axios.get("https://api.langdock.com/assistant/v1/models", {
headers: {
Authorization: "Bearer YOUR_API_KEY",
},
});
console.log("Available models:", response.data.data);
} catch (error) {
console.error("Error fetching models:", error);
}
}
```
## Response Format
The API returns a list of available models in the following format:
### Response Fields
* `object` (string): Always `list`, indicating the top-level JSON object type.
* `data` (array): Array containing available model objects.
* `data[].id` (string): Unique identifier of the model (e.g., `gpt-5`).
* `data[].object` (string): Always `model`, indicating the object type.
* `data[].created` (number): Unix timestamp (ms) when the model was created.
* `data[].owned_by` (string): Owner of the model (currently always `system`).
```json Example response theme={null}
{
"object": "list",
"data": [
{
"id": "gpt-5",
"object": "model",
"created": 1686935735000,
"owned_by": "system"
}
// …other models
]
}
```
## Error Handling
```javascript theme={null}
try {
const response = await axios.get("https://api.langdock.com/assistant/v1/models", {
headers: {
Authorization: "Bearer YOUR_API_KEY",
},
});
} catch (error) {
if (error.response) {
switch (error.response.status) {
case 400:
console.error("Invalid request parameters");
break;
case 500:
console.error("Internal server error");
break;
}
}
}
```
You can use any of these model IDs when creating a temporary Assistant through the assistant API. Simply specify the model ID in the `model` field of your assistant configuration:
```javascript Assistant.js theme={null}
const response = await axios.post("https://api.langdock.com/assistant/v1/chat/completions", {
assistant: {
name: "Custom Assistant",
instructions: "You are a helpful Assistant",
model: "gpt-5", // Specify the model ID here
},
messages: [
{ role: "user", content: "Hello!" },
],
});
```
```bash Assistant.sh theme={null}
curl https://api.langdock.com/assistant/v1/chat/completions \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
    "assistant": {
"name": "Custom Assistant",
"instructions": "You are a helpful Assistant",
"model": "gpt-5"
},
"messages": [
{ "role": "user", "content": "Hello!" }
]
}'
```
## Migrating to Agents API
The new Agents API offers improved compatibility with modern AI SDKs. The models endpoint has identical functionality.
See the equivalent endpoint in the Agents API:
* [Agent Models API](/api-endpoints/agent/agent-models)
Langdock intentionally blocks browser-origin requests to protect your API key and ensure your applications remain secure. For more information, please see our guide on [API Key Best Practices](/administration/api-key-best-practices).
# Migrating from Assistants API to Agents API
Source: https://docs.langdock.com/api-endpoints/assistant/assistant-to-agent-migration
Guide to migrating your integration from the legacy Assistants API to the new Agents API
The Agents API is the next generation of our API, designed for better compatibility with modern AI SDKs like the Vercel AI SDK.
## Overview
The Agents API represents a significant improvement over the Assistants API, with the main goal of providing native compatibility with industry-standard AI SDKs. The key difference is the removal of custom input/output transformations in favor of standard formats.
### Why Migrate?
* **Vercel AI SDK compatibility**: Works natively with AI SDK 5's `useChat` function
* **Standard formats**: Uses industry-standard message formats instead of custom transformations
* **Better streaming**: Native support for AI SDK streaming patterns
* **Future-proof**: The Assistants API will be deprecated in a future release
## Key Differences
### Endpoint Changes
| Assistants API | Agents API | Breaking? |
| -------------------------------- | ---------------------------- | ------------------------- |
| `/assistant/v1/chat/completions` | `/agent/v1/chat/completions` | Yes - Format changes |
| `/assistant/v1/create` | `/agent/v1/create` | No - Only parameter names |
| `/assistant/v1/get` | `/agent/v1/get` | No - Only parameter names |
| `/assistant/v1/update` | `/agent/v1/update` | No - Only parameter names |
| `/assistant/v1/models` | `/agent/v1/models` | No - Identical |
### Parameter Changes (Non-Breaking)
For create, get, and update endpoints, the only change is parameter naming:
* `assistantId` → `agentId`
* Request/response structure remains identical
* All other parameters unchanged
## Breaking Changes in `/chat/completions`
The chat completions endpoint has significant format changes to support Vercel AI SDK compatibility.
### Request Format Changes
#### Old Format (Assistants API)
```javascript theme={null}
{
assistantId: "asst_123",
messages: [
{
role: "user",
content: "Hello, how are you?", // Simple string content
attachmentIds: ["uuid-1234"]
}
],
stream: true
}
```
#### New Format (Agents API)
```javascript theme={null}
{
agentId: "agent_123", // Parameter name changed
messages: [
{
id: "msg_1", // New: Message ID required
role: "user",
parts: [ // New: Parts array instead of content string
{
type: "text",
text: "Hello, how are you?"
},
{
type: "file",
url: "attachment://uuid-1234" // New: Attachment format
}
]
}
],
stream: true
}
```
### Key Request Differences
1. **Message Structure**:
* Old: `content` as string
* New: `parts` as array with typed objects
2. **Message ID**:
* Old: Optional or auto-generated
* New: Required `id` field for each message
3. **Attachments**:
* Old: `attachmentIds` array at message level
* New: File parts with `type: "file"` in parts array
4. **Parameter Name**:
* Old: `assistantId`
* New: `agentId`
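These four rules can be mechanized. A sketch of a converter from the old message shape to the new one, based on the formats shown above (`makeId` stands in for whatever message-ID generator you use):

```javascript
// Convert an Assistants API message to the Agents API shape:
// the string `content` becomes a text part, and each attachment ID
// becomes a file part with the attachment:// URL scheme.
function toAgentMessage(oldMessage, makeId) {
  const parts = [{ type: "text", text: oldMessage.content }];
  for (const attachmentId of oldMessage.attachmentIds ?? []) {
    parts.push({ type: "file", url: `attachment://${attachmentId}` });
  }
  return { id: makeId(), role: oldMessage.role, parts };
}
```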
### Response Format Changes
#### Old Format (Assistants API)
```javascript theme={null}
{
result: [
{
id: "msg_456",
role: "assistant",
content: [
{
type: "text",
text: "I'm doing well, thank you!"
}
]
}
]
}
```
#### New Format (Agents API)
```javascript theme={null}
{
id: "msg_456",
role: "assistant",
parts: [
{
type: "text",
text: "I'm doing well, thank you!"
}
]
}
```
### Key Response Differences
1. **Top-level Structure**:
* Old: Wrapped in `result` array
* New: Direct message object
2. **Content Field**:
* Old: `content` array
* New: `parts` array
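During a gradual migration both response shapes may be in play at once. A small normalizer (a hypothetical helper built from the shapes shown above, not part of either API) can hide the difference from the rest of your code:

```javascript
// Extract the reply text from either response shape:
// old: { result: [ { content: [ { type: "text", text } ] } ] }
// new: { parts: [ { type: "text", text } ] }
function extractText(responseData) {
  const message = Array.isArray(responseData.result)
    ? responseData.result[0]
    : responseData;
  const items = message.parts ?? message.content ?? [];
  return items.find((p) => p.type === "text")?.text ?? "";
}
```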
### Streaming Changes
#### Old Format (Assistants API)
```
data: {"type":"message","content":"Hello"}
data: {"type":"message","content":" world"}
data: {"type":"done"}
```
#### New Format (Agents API)
Uses Vercel AI SDK streaming format:
```
0:"text chunk 1"
0:" text chunk 2"
...
```
The Agents API streams in Vercel AI SDK's native format, compatible with the `useChat` hook.
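For clients that do not use the SDK, the text chunks can be recovered by hand. A simplified parser sketch for the `prefix:json` line format shown above (real streams carry additional prefixes for tool calls and metadata, which this deliberately ignores):

```javascript
// Collect text from data-stream lines of the form `<prefix>:"chunk"`.
// Each payload after the numeric prefix is a JSON-encoded string.
function collectStreamText(streamBody) {
  const chunks = [];
  for (const line of streamBody.split("\n")) {
    const match = line.match(/^\d+:(.*)$/);
    if (match) chunks.push(JSON.parse(match[1]));
  }
  return chunks.join("");
}
```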
## Migration Steps
### Step 1: Update Endpoint URLs
```javascript theme={null}
// Before
const url = 'https://api.langdock.com/assistant/v1/chat/completions';
// After
const url = 'https://api.langdock.com/agent/v1/chat/completions';
```
### Step 2: Update Parameter Names (Non-Breaking Endpoints)
For create, get, update endpoints:
```javascript theme={null}
// Before
{ assistantId: "asst_123" }
// After
{ agentId: "agent_123" }
```
### Step 3: Update Message Format (Breaking - Chat Completions)
#### Converting Messages
```javascript theme={null}
// Before (Assistants API)
const oldMessage = {
role: "user",
content: "Analyze this document",
attachmentIds: ["uuid-1234"]
};
// After (Agents API)
const newMessage = {
id: generateId(), // You need to generate message IDs
role: "user",
parts: [
{
type: "text",
text: "Analyze this document"
},
{
type: "file",
url: "attachment://uuid-1234"
}
]
};
```
#### Using with Vercel AI SDK
The Agents API works natively with Vercel AI SDK's `useChat` hook:
```typescript theme={null}
import { useChat } from '@ai-sdk/react';
function ChatComponent() {
const { messages, input, handleSubmit, handleInputChange } = useChat({
api: 'https://api.langdock.com/agent/v1/chat/completions',
headers: {
'Authorization': `Bearer ${API_KEY}`
},
body: {
agentId: 'agent_123'
}
});
// The hook handles all message formatting automatically!
  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
    </div>
  );
}
```
### Step 4: Update Response Handling
```javascript theme={null}
// Before (Assistants API)
const response = await fetch(assistantUrl, options);
const data = await response.json();
const messages = data.result; // Array of messages
// After (Agents API)
const response = await fetch(agentUrl, options);
const data = await response.json();
const message = data; // Direct message object
```
### Step 5: Update Streaming Code
#### Before (Custom SSE Parsing)
```javascript theme={null}
const response = await fetch(url, options);
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = decoder.decode(value);
const lines = chunk.split('\n');
for (const line of lines) {
if (line.startsWith('data: ')) {
const data = JSON.parse(line.slice(6));
if (data.type === 'message') {
console.log(data.content);
}
}
}
}
```
#### After (Vercel AI SDK)
```typescript theme={null}
import { streamText } from 'ai';
// `langdock` here stands for a custom provider wrapper around the Agents API
// endpoint; the AI SDK does not include one out of the box.
const result = await streamText({
  model: langdock({
    apiKey: process.env.LANGDOCK_API_KEY,
    agentId: 'agent_123'
  }),
messages: conversationHistory
});
// Stream text chunks
for await (const chunk of result.textStream) {
process.stdout.write(chunk);
}
```
## Code Examples
### Complete Migration Example
#### Before (Assistants API)
```javascript theme={null}
const axios = require("axios");
async function chatWithAssistant() {
const response = await axios.post(
"https://api.langdock.com/assistant/v1/chat/completions",
{
assistantId: "asst_123",
messages: [
{
role: "user",
content: "What's the weather today?",
attachmentIds: []
}
],
stream: false
},
{
headers: {
Authorization: "Bearer YOUR_API_KEY"
}
}
);
// Response wrapped in result array
const assistantMessage = response.data.result[0];
console.log(assistantMessage.content[0].text);
}
```
#### After (Agents API)
```javascript theme={null}
const axios = require("axios");
async function chatWithAgent() {
const response = await axios.post(
"https://api.langdock.com/agent/v1/chat/completions",
{
agentId: "agent_123", // Changed parameter name
messages: [
{
id: "msg_1", // Added message ID
role: "user",
parts: [ // Changed to parts array
{
type: "text",
text: "What's the weather today?"
}
]
}
],
stream: false
},
{
headers: {
Authorization: "Bearer YOUR_API_KEY"
}
}
);
// Response is direct message object
const agentMessage = response.data;
console.log(agentMessage.parts[0].text);
}
```
### Using with Next.js and Vercel AI SDK
```typescript theme={null}
// app/api/chat/route.ts
import { StreamingTextResponse } from 'ai';
export async function POST(req: Request) {
const { messages, agentId } = await req.json();
const response = await fetch(
'https://api.langdock.com/agent/v1/chat/completions',
{
method: 'POST',
headers: {
'Authorization': `Bearer ${process.env.LANGDOCK_API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
agentId,
messages,
stream: true
})
}
);
// Return streaming response
return new StreamingTextResponse(response.body);
}
```
## Testing Your Migration
### Checklist
* [ ] Update all endpoint URLs from `/assistant/v1/*` to `/agent/v1/*`
* [ ] Replace `assistantId` with `agentId` in all requests
* [ ] Convert message `content` strings to `parts` arrays (for chat completions)
* [ ] Add `id` field to all messages (for chat completions)
* [ ] Update attachment references to use file parts format
* [ ] Update response handling to work with new format
* [ ] Test streaming with new format (or use Vercel AI SDK)
* [ ] Update error handling for new response structure
### Gradual Migration Strategy
You can migrate endpoints gradually:
1. **Start with non-breaking endpoints**: Migrate create, get, update, models first (only parameter names change)
2. **Test thoroughly**: Ensure these work correctly
3. **Migrate chat completions last**: This requires the most code changes
4. **Use feature flags**: Toggle between old and new APIs during transition
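Step 4 of this strategy can be as simple as a request builder keyed on a flag (the flag and function names below are hypothetical, but the endpoint paths and parameter names come from this guide):

```javascript
// Build the request target and body for either API behind a feature flag,
// so the calling code can be toggled without touching the request shape twice.
function buildChatRequest({ useAgentsApi, id, messages }) {
  if (useAgentsApi) {
    return {
      url: "https://api.langdock.com/agent/v1/chat/completions",
      body: { agentId: id, messages },
    };
  }
  return {
    url: "https://api.langdock.com/assistant/v1/chat/completions",
    body: { assistantId: id, messages },
  };
}
```

Note that the `messages` payload still has to match the target API's format; this helper only routes the request.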
## Common Migration Issues
### Issue 1: Missing Message IDs
**Problem**: Agents API requires message IDs
```javascript theme={null}
// Error: Missing id field
{
role: "user",
parts: [...]
}
```
**Solution**: Generate unique IDs for each message
```javascript theme={null}
import { nanoid } from 'nanoid';
{
id: nanoid(),
role: "user",
parts: [...]
}
```
### Issue 2: Attachment Format
**Problem**: Old attachment format not recognized
```javascript theme={null}
// Wrong
{
role: "user",
attachmentIds: ["uuid-1234"],
parts: [...]
}
```
**Solution**: Use file parts
```javascript theme={null}
// Correct
{
id: "msg_1",
role: "user",
parts: [
{ type: "text", text: "..." },
{ type: "file", url: "attachment://uuid-1234" }
]
}
```
### Issue 3: Response Parsing
**Problem**: Looking for `result` array
```javascript theme={null}
// Wrong - result doesn't exist in Agents API
const messages = response.data.result;
```
**Solution**: Use direct message object
```javascript theme={null}
// Correct
const message = response.data;
const text = message.parts.find(p => p.type === 'text')?.text;
```
## Support
If you encounter issues during migration:
1. Check the [Agents API documentation](/api-endpoints/agent/agent) for detailed examples
2. Review the [Vercel AI SDK documentation](https://sdk.vercel.ai/docs) for SDK-specific help
3. Contact support at [support@langdock.com](mailto:support@langdock.com)
## Timeline
* **Current**: Both APIs are available
* **Future**: Assistants API will be deprecated (date TBD)
* **Recommendation**: Migrate new projects to Agents API now
For questions or assistance with migration, contact our support team at [support@langdock.com](mailto:support@langdock.com).
# Assistant Update API
Source: https://docs.langdock.com/api-endpoints/assistant/assistant-update
PATCH /assistant/v1/update
Update an existing Assistant programmatically
**The Assistants API will be deprecated in a future release.**
For new projects, we recommend using the [Agents API](/api-endpoints/agent/agent-update). The Agents API provides native Vercel AI SDK compatibility and removes custom transformations.
See the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration) to learn about the differences.
Updates an existing Assistant in your workspace. Only the fields you provide will be updated, allowing for partial updates without affecting other configuration.
Requires an API key with the `AGENT_API` scope and access to the Assistant you want to update.
## Update Behavior
The update endpoint uses partial update semantics with specific behavior for different field types:
* **Partial updates** - Only fields provided in the request are updated; omitted fields remain unchanged
* **Array fields replace** - `actions`, `inputFields`, `conversationStarters`, and `attachments` completely replace existing values when provided
* **Empty arrays** - Send `[]` to remove all actions/fields/attachments
* **Null handling** - Send `null` for `emoji`, `description`, or `instruction` to clear them
* **Unchanged fields** - Fields not included in the request retain their current values
## Request Parameters
All fields are optional except `assistantId`:
| Parameter | Type | Required | Description |
| ---------------------- | --------- | -------- | ------------------------------------------------------ |
| `assistantId` | string | Yes | UUID of the assistant to update |
| `name` | string | No | Updated name (1-255 characters) |
| `description` | string | No | Updated description (max 256 chars, null to clear) |
| `emoji` | string | No | Updated emoji icon (null to clear) |
| `instruction` | string | No | Updated system prompt (max 16384 chars, null to clear) |
| `model` | string | No | Updated model UUID |
| `creativity` | number | No | Updated temperature between 0-1 |
| `conversationStarters` | string\[] | No | Updated array of suggested prompts (replaces existing) |
| `actions` | array | No | Updated array of actions (replaces existing) |
| `inputFields` | array | No | Updated array of form fields (replaces existing) |
| `attachments` | string\[] | No | Updated array of attachment UUIDs (replaces existing) |
| `webSearch` | boolean | No | Updated web search capability setting |
| `imageGeneration` | boolean | No | Updated image generation capability setting |
| `dataAnalyst` | boolean | No | Updated code interpreter capability setting |
| `canvas` | boolean | No | Updated canvas capability setting |
Array fields (`actions`, `inputFields`, `conversationStarters`, `attachments`) are **replaced entirely**, not merged. Always provide the complete desired array, including any existing items you want to keep.
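Because of this replace-entirely semantics, an "add one attachment" operation must send the complete array, typically after fetching the current configuration. A pure sketch of building such a payload (assuming you have already fetched the current Assistant object):

```javascript
// Build an update payload that appends one attachment while keeping the
// existing ones, since the `attachments` array replaces the stored value.
function appendAttachmentPayload(currentAssistant, newAttachmentId) {
  return {
    assistantId: currentAssistant.id,
    attachments: [...currentAssistant.attachments, newAttachmentId],
  };
}
```

The resulting object can then be sent as the PATCH body shown in the examples below.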
### Actions Configuration
Each action in the `actions` array should contain:
* `actionId` (required) - UUID of the action from an enabled integration
* `requiresConfirmation` (optional) - Whether to require user confirmation before executing
### Input Fields Configuration
For `inputFields` array structure, see the [Create Assistant API](/api-endpoints/assistant/assistant-create) documentation.
## Examples
### Updating Basic Properties
```javascript theme={null}
const axios = require("axios");
async function updateAssistantName() {
const response = await axios.patch(
"https://api.langdock.com/assistant/v1/update",
{
assistantId: "550e8400-e29b-41d4-a716-446655440000",
name: "Advanced Document Analyzer",
description: "Analyzes documents with enhanced capabilities",
creativity: 0.7
},
{
headers: {
Authorization: "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
}
}
);
console.log("Assistant updated:", response.data.message);
}
```
## Validation Rules
The API enforces several validation rules:
* **Assistant access** - Your API key must have access to the assistant
* **Workspace match** - Assistant must belong to the same workspace as your API key
* **Model** - If provided, must be in your workspace's active models list
* **Actions** - If provided, must belong to integrations enabled in your workspace
* **Attachments** - If provided, must exist in your workspace and not be deleted
* **Name** - If provided, must be between 1-255 characters
* **Description** - If provided, maximum 256 characters
* **Instruction** - If provided, maximum 16384 characters
* **Creativity** - If provided, must be between 0 and 1
## Response Format
### Success Response (200 OK)
```typescript theme={null}
{
status: "success";
message: "Assistant updated successfully";
assistant: {
id: string;
name: string;
description: string;
instruction: string;
emojiIcon: string;
model: string;
temperature: number;
conversationStarters: string[];
inputType: "PROMPT" | "STRUCTURED";
webSearchEnabled: boolean;
imageGenerationEnabled: boolean;
codeInterpreterEnabled: boolean;
canvasEnabled: boolean;
actions: Array<{
actionId: string;
requiresConfirmation: boolean;
}>;
inputFields: Array<{
slug: string;
type: string;
label: string;
description: string;
required: boolean;
order: number;
options: string[];
fileTypes: string[] | null;
}>;
attachments: string[];
createdAt: string;
updatedAt: string;
};
}
```
## Error Handling
```typescript theme={null}
try {
const response = await axios.patch('https://api.langdock.com/assistant/v1/update', ...);
} catch (error) {
if (error.response) {
switch (error.response.status) {
case 400:
console.error('Invalid parameters:', error.response.data.message);
break;
case 401:
console.error('Invalid or missing API key');
break;
case 403:
console.error('Insufficient permissions - no access to this Assistant');
break;
case 404:
console.error('Assistant not found or resource not found (model, action, attachment)');
break;
case 500:
console.error('Server error');
break;
}
}
}
```
## Best Practices
**Preserving existing values**: When updating array fields like `actions` or `attachments`, always include existing items you want to keep, as the entire array is replaced.
1. **Fetch before update** - If you need to preserve existing array values, fetch the current Assistant configuration first
2. **Incremental updates** - Update only the fields that need to change
3. **Validate attachments** - Ensure attachment UUIDs are valid before including them
4. **Test actions** - Verify actions belong to enabled integrations before updating
5. **Handle errors gracefully** - Implement proper error handling for validation failures
## Migrating to Agents API
The new Agents API offers improved compatibility with modern AI SDKs. The update endpoint has similar functionality with updated parameter names.
See the equivalent endpoint in the Agents API:
* [Agent Update API](/api-endpoints/agent/agent-update) - Uses `agentId` instead of `assistantId`
Langdock intentionally blocks browser-origin requests to protect your API key and ensure your applications remain secure. For more information, please see our guide on [API Key Best Practices](/administration/api-key-best-practices).
# Upload Attachment API
Source: https://docs.langdock.com/api-endpoints/assistant/upload-attachments
POST /attachment/v1/upload
Upload files to be used with Assistants
**The Assistants API will be deprecated in a future release.**
For new projects, we recommend using the [Agents API](/api-endpoints/agent/upload-attachments). The upload attachments endpoint remains the same, but you should reference it from the Agents API documentation.
See the [migration guide](/api-endpoints/assistant/assistant-to-agent-migration) to learn about the differences.
Upload files that can be referenced in Assistant conversations using their attachment IDs.
To use the API you need an API key. You can create API Keys in your [Workspace
settings](https://app.langdock.com/settings/workspace/products/api).
## Request Format
This endpoint accepts `multipart/form-data` requests with a single file upload.
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | --------------------------- |
| `file` | File | Yes | The file you want to upload |
## Response Format
The API returns the uploaded file information:
```typescript theme={null}
{
attachmentId: string;
file: {
name: string;
mimeType: string;
sizeInBytes: number;
}
}
```
## Example
```javascript theme={null}
const axios = require("axios");
const FormData = require("form-data");
const fs = require("fs");
async function uploadAttachment() {
const form = new FormData();
form.append("file", fs.createReadStream("example.pdf"));
const response = await axios.post(
"https://api.langdock.com/attachment/v1/upload",
form,
{
headers: {
...form.getHeaders(),
Authorization: "Bearer YOUR_API_KEY",
},
}
);
console.log(response.data);
// {
// attachmentId: "550e8400-e29b-41d4-a716-446655440000",
// file: {
// name: "example.pdf",
// mimeType: "application/pdf",
// sizeInBytes: 1234567
// }
// }
}
```
## Error Handling
```javascript theme={null}
try {
const response = await axios.post('https://api.langdock.com/attachment/v1/upload', ...);
} catch (error) {
if (error.response) {
switch (error.response.status) {
case 400:
console.error('No file provided');
break;
case 401:
console.error('Invalid API key');
break;
case 500:
console.error('Server error');
break;
}
}
}
```
The uploaded attachment ID can be used in the Assistant API by including it in the `attachmentIds` array either at the assistant level or message level.
## Migrating to Agents API
The upload attachment endpoint remains the same across both APIs. For new projects, reference the Agents API documentation:
* [Agent Upload Attachments API](/api-endpoints/agent/upload-attachments)
Langdock intentionally blocks browser-origin requests to protect your API key and ensure your applications remain secure. For more information, please see our guide on [API Key Best Practices](/administration/api-key-best-practices).
# Anthropic Messages
Source: https://docs.langdock.com/api-endpoints/completion/anthropic
POST /anthropic/{region}/v1/messages
Creates a model response given a structured list of input messages using the Anthropic API.
Creates a model response for the given chat conversation. This endpoint follows the [Anthropic API specification](https://platform.claude.com/docs/en/api/messages) and the requests are sent to the AWS Bedrock Anthropic endpoint.
To use the API you need an API key. Admins can create API keys in the settings.
All parameters from the [Anthropic "Create a message" endpoint](https://platform.claude.com/docs/en/api/messages) are supported according to the Anthropic specifications, with the following exception:
* `model`: The supported models are: `claude-sonnet-4-5-20250929`, `claude-sonnet-4-20250514`, `claude-3-7-sonnet-20250219`, `claude-3-5-sonnet-20240620`.
* The list of available models might differ if you are using your own API keys in Langdock ("Bring-your-own-keys / BYOK", see [here](/settings/models/byok) for details). In this case, please reach out to your admin to understand which models are available in the API.
## Rate limits
The rate limit for the Messages endpoint is **500 RPM (requests per minute)** and **60,000 TPM (tokens per minute)**. Rate limits are defined at the workspace level, not per API key. Each model has its own rate limit. If you exceed your rate limit, you will receive a `429 Too Many Requests` response.
Please note that the rate limits are subject to change, refer to this documentation for the most up-to-date information.
In case you need a higher rate limit, please contact us at [support@langdock.com](mailto:support@langdock.com).
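When a `429` does occur, retrying with exponential backoff is usually enough. A minimal sketch (`sendRequest` is a placeholder for your actual HTTP call, e.g. via axios, whose errors expose `error.response.status`):

```javascript
// Retry a request with exponential backoff when the API answers 429.
// Any other error, or exhausting the retry budget, is re-thrown.
async function withBackoff(sendRequest, { maxRetries = 3, baseDelayMs = 1000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await sendRequest();
    } catch (error) {
      if (error.response?.status !== 429 || attempt >= maxRetries) throw error;
      const delayMs = baseDelayMs * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```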
## Using Anthropic-compatible libraries
As the request and response format is the same as the Anthropic API, you can use popular libraries like the [Anthropic Python library](https://github.com/anthropics/anthropic-sdk-python) or the [Vercel AI SDK](https://ai-sdk.dev/docs/introduction) to use the Langdock API.
### Example using the Anthropic Python library
```python theme={null}
from anthropic import Anthropic
client = Anthropic(
base_url="https://api.langdock.com/anthropic/eu/",
api_key=""
)
message = client.messages.create(
model="claude-3-5-sonnet-20240620",
messages=[
{ "role": "user", "content": "Write a haiku about cats" }
],
max_tokens=1024,
)
print(message.content[0].text)
```
### Example using the Vercel AI SDK in Node.js
```typescript theme={null}
import { generateText } from "ai";
import { createAnthropic } from "@ai-sdk/anthropic";
const langdockProvider = createAnthropic({
baseURL: "https://api.langdock.com/anthropic/eu/v1",
apiKey: "",
});
const result = await generateText({
model: langdockProvider("claude-3-5-sonnet-20240620"),
prompt: "Write a haiku about cats",
});
console.log(result.text);
```
Langdock intentionally blocks browser-origin requests to protect your API key and ensure your applications remain secure. For more information, please see our guide on [API Key Best Practices](/administration/api-key-best-practices).
# Google Completion API
Source: https://docs.langdock.com/api-endpoints/completion/google
POST /google/{region}/v1beta/models/{model}:generateContent
Generate text with Google Gemini models through Langdock's public API. Supports normal and streaming completions and is fully compatible with the official Vertex AI SDKs (Python / Node).
# Google Completion Endpoint (v1beta)
This endpoint exposes Google Gemini models that are hosted in Google Vertex AI.\
It mirrors the structure of the official Vertex **generateContent** API. To use it:

1. Call `GET /{region}/v1beta/models` to retrieve the list of Gemini models.
2. Choose a model ID and decide between `generateContent` and `streamGenerateContent`.
3. `POST` to `/{region}/v1beta/models/{model}:{action}` with your prompt in `contents`.
4. Parse the JSON response for normal calls, or consume the SSE events for streaming.

The endpoint supports:

* Region selection (`eu` or `us`)
* Optional Server-Sent Event (SSE) streaming with the same event labels used by the Google Python SDK (`message_start`, `message_delta`, `message_stop`)
* A **models** discovery endpoint
## Base URL
```
https://api.langdock.com/google/{region}
```
In dedicated deployments, `api.langdock.com` maps to `<your-deployment-domain>/api/public`.
***
## Authentication
Send one of the following headers with your Langdock API key:

* `Authorization: Bearer <api-key>`: the standard bearer token header.
* `x-api-key: <api-key>`: alternative header for the same API key.
* `x-goog-api-key: <api-key>`: convenience header used by the official **google-generative-ai** Python SDK.

All three headers are treated identically. Missing or invalid keys return **401 Unauthorized**.
**Authorization header example:**
```bash theme={null}
curl -H "Authorization: Bearer $LD_API_KEY" \
https://api.langdock.com/google/eu/v1beta/models
```
**x-api-key header example:**
```bash theme={null}
curl -H "x-api-key: $LD_API_KEY" \
https://api.langdock.com/google/eu/v1beta/models
```
**x-goog-api-key header example:**
```bash theme={null}
curl -H "x-goog-api-key: $LD_API_KEY" \
https://api.langdock.com/google/eu/v1beta/models
```
***
## 1. List available models
### GET `/{region}/v1beta/models`
`region` must be `eu` or `us`.
#### Successful response
List of objects with the following shape:
* **name** – Fully-qualified model name (e.g. `models/gemini-2.5-flash`).
* **displayName** – Human-readable name shown in the Langdock UI.
* **supportedGenerationMethods** – Always `["generateContent", "streamGenerateContent"]`.
```bash theme={null}
curl -H "Authorization: Bearer $LD_API_KEY" \
https://api.langdock.com/google/eu/v1beta/models
```
***
## 2. Generate content
### POST `/{region}/v1beta/models/{model}:{action}`
* **model** – The model ID as returned by the *models* endpoint (without the `models/` prefix).
* **action** – `generateContent` or `streamGenerateContent`, depending on whether you want streaming.
Example path: `google/eu/v1beta/models/gemini-2.5-flash:streamGenerateContent`
### Request body
The request body follows the official
[`GenerateContentRequest`](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference) structure.
#### Required fields
**`contents`** (Content\[], required)\
Conversation history. Each object has a **role** (string) and **parts** array containing objects with **text** (string).
```json theme={null}
"contents": [
{
"role": "user",
"parts": [
{
"text": "What's the weather like?"
}
]
}
]
```
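Multi-turn conversations alternate `user` and `model` roles in the `contents` array. A small sketch for assembling it (the helper is illustrative):

```python theme={null}
def make_content(role: str, text: str) -> dict:
    """Build one Content object: a role plus a parts array of text parts."""
    return {"role": role, "parts": [{"text": text}]}

# A three-turn history: user question, model follow-up, user answer.
contents = [
    make_content("user", "What's the weather like?"),
    make_content("model", "Which city are you asking about?"),
    make_content("user", "Berlin"),
]
```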
**`model`** (string, required)\
The model to use for generation (e.g., "gemini-2.5-pro", "gemini-2.5-flash").
#### Optional fields
**`generationConfig`** (object, optional)\
Configuration for text generation. Supported fields:
* `temperature` (number): Controls randomness (0.0-2.0)
* `topP` (number): Nucleus sampling parameter (0.0-1.0)
* `topK` (number): Top-k sampling parameter
* `candidateCount` (number): Number of response candidates to generate
* `maxOutputTokens` (number): Maximum number of tokens to generate
* `stopSequences` (string\[]): Sequences that will stop generation
* `responseMimeType` (string): MIME type of the response
* `responseSchema` (object): Schema for structured output
```json theme={null}
"generationConfig": {
"temperature": 0.7,
"topP": 0.9,
"topK": 40,
"maxOutputTokens": 1000,
"stopSequences": ["END", "STOP"]
}
```
**`safetySettings`** (SafetySetting\[], optional)\
Array of safety setting objects. Each object contains:
* `category` (string): The harm category (e.g., "HARM\_CATEGORY\_HARASSMENT")
* `threshold` (string): The blocking threshold (e.g., "BLOCK\_MEDIUM\_AND\_ABOVE")
```json theme={null}
"safetySettings": [
{
"category": "HARM_CATEGORY_HARASSMENT",
"threshold": "BLOCK_MEDIUM_AND_ABOVE"
}
]
```
**`tools`** (Tool\[], optional)\
Array of tool objects for function calling. Each tool contains `functionDeclarations` array with:
* `name` (string): Function name
* `description` (string): Function description
* `parameters` (object): JSON schema defining function parameters
```json theme={null}
"tools": [
{
"functionDeclarations": [
{
"name": "get_weather",
"description": "Get current weather information",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name"
}
}
}
}
]
}
]
```
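Each function declaration pairs a name and description with a JSON-schema `parameters` object. A sketch of a builder for such declarations (the helper is illustrative, not part of the API):

```python theme={null}
def function_declaration(name, description, properties, required=None):
    """Build one entry for tools[].functionDeclarations."""
    return {
        "name": name,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required or [],
        },
    }

tools = [{
    "functionDeclarations": [
        function_declaration(
            "get_weather",
            "Get current weather information",
            {"location": {"type": "string", "description": "City name"}},
            required=["location"],
        )
    ]
}]
```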
**`toolConfig`** (object, optional)\
Configuration for function calling. Contains `functionCallingConfig` with:
* `mode` (string): Function calling mode ("ANY", "AUTO", "NONE")
* `allowedFunctionNames` (string\[]): Array of allowed function names
```json theme={null}
"toolConfig": {
"functionCallingConfig": {
"mode": "ANY",
"allowedFunctionNames": ["get_weather"]
}
}
```
**`systemInstruction`** (string | Content, optional)\
System instruction to guide the model's behavior. Can be a string or Content object with role and parts.
```json theme={null}
"systemInstruction": {
"role": "system",
"parts": [
{
"text": "You are a weather agent. Use the weather tool when asked about weather."
}
]
}
```
If `toolConfig.functionCallingConfig.allowedFunctionNames` is provided, `mode` **must** be `ANY`.
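This constraint can be checked client-side before sending a request. A minimal sketch (the helper name is illustrative):

```python theme={null}
def check_tool_config(tool_config: dict) -> dict:
    """Raise if allowedFunctionNames is set while mode is not ANY."""
    cfg = tool_config.get("functionCallingConfig", {})
    if cfg.get("allowedFunctionNames") and cfg.get("mode") != "ANY":
        raise ValueError("allowedFunctionNames requires mode == 'ANY'")
    return tool_config
```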
#### Minimal example
```bash theme={null}
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $LD_API_KEY" \
https://api.langdock.com/google/us/v1beta/models/gemini-2.5-pro:generateContent \
-d '{
"contents": [{
"role": "user",
"parts": [{"text": "Write a short poem about the ocean."}]
}]
}'
```
### Streaming
When **action** is `streamGenerateContent`, the endpoint returns a
`text/event-stream` response with the following events:
• `message_start` – first chunk that contains content\
• `message_delta` – subsequent chunks\
• `message_stop` – last chunk (contains `finishReason` and usage metadata)
Example `message_delta` event:
```
event: message_delta
data: {
"candidates": [
{
"index": 0,
"content": {
"role": "model",
"parts": [{ "text": "The ocean whispers..." }]
}
}
]
}
```
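A client consumes the stream by splitting it into blank-line-separated event blocks. A simplified parser sketch (it assumes each block's `data:` lines concatenate to one JSON document; pretty-printed payloads like the example above would need their continuation lines joined first):

```python theme={null}
import json

def parse_events(raw: str):
    """Yield (event_name, payload) pairs from an SSE-style stream."""
    for block in raw.strip().split("\n\n"):
        event, data_lines = None, []
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        if event:
            yield event, json.loads("".join(data_lines)) if data_lines else None
```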
**Python SDK example with function calling:**
```python theme={null}
import google.generativeai as genai
def get_current_weather(location):
    """Get the current weather in a given location"""
    return f"The current weather in {location} is sunny with a temperature of 70 degrees and a wind speed of 5 mph."

genai.configure(
    api_key="",
    transport="rest",
    client_options={"api_endpoint": "https://api.langdock.com/google/eu/"},  # region: eu or us
)

model = genai.GenerativeModel("gemini-2.5-flash", tools=[get_current_weather])
response = model.generate_content(
    "Please tell me the weather in San Francisco, then tell me a story on the history of the city"
)
print(response)
```
**Python SDK streaming example:**
```python theme={null}
model = genai.GenerativeModel("gemini-2.5-flash")
response = model.generate_content(
    "Tell me an elaborate story on the history of the city of San Francisco",
    stream=True,
)

for chunk in response:
    if chunk.text:
        print(chunk.text)
```
## Using Google-compatible libraries
The endpoint is fully compatible with official Google SDKs, including the Vertex AI Node SDK (`@google-cloud/vertexai`), the Google Generative AI Python library (`google-generativeai`), and the Vercel AI SDK for edge streaming.
Langdock intentionally blocks browser-origin requests to protect your API key and ensure your applications remain secure. For more information, please see our guide on [API Key Best Practices](/administration/api-key-best-practices).
# Codestral
Source: https://docs.langdock.com/api-endpoints/completion/mistral
POST /mistral/{region}/v1/fim/completions
Code generation using the Codestral model from Mistral.
Creates a code completion using the [Codestral model from Mistral](https://docs.mistral.ai/capabilities/code_generation).
All parameters from the [Mistral fill-in-the-middle Completion endpoint](https://docs.mistral.ai/capabilities/code_generation#fill-in-the-middle-endpoint) are supported according to the Mistral specifications.
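A fill-in-the-middle request supplies the code before the cursor as `prompt` and the code after it as `suffix`. A sketch of a request body following that specification (the model name mirrors the Continue example below; treat all field values as illustrative):

```python theme={null}
fim_body = {
    "model": "codestral-2501",
    "prompt": "def fibonacci(n):\n    ",  # code before the cursor
    "suffix": "\n    return result",      # code after the cursor
    "max_tokens": 64,
}
# POST this as JSON to /mistral/{region}/v1/fim/completions
# with your Langdock API key in the Authorization header.
```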
## Rate limits
The rate limit for the FIM Completion endpoint is **500 RPM (requests per minute)** and **60,000 TPM (tokens per minute)**. Rate limits are defined at the workspace level, not at the API key level. Each model has its own rate limit. If you exceed your rate limit, you will receive a `429 Too Many Requests` response.
Please note that rate limits are subject to change; refer to this documentation for the most up-to-date information.
In case you need a higher rate limit, please contact us at [support@langdock.com](mailto:support@langdock.com).
## Using the Continue AI Code Agent
Using the Codestral model, combined with chat completion models from the Langdock API, makes it possible to use the open-source AI code agent [Continue (continue.dev)](https://www.continue.dev) fully via the Langdock API.
Continue is available as a [VS Code extension](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and as a JetBrains extension. To customize the models used by Continue, edit the configuration file at `~/.continue/config.json` (macOS/Linux) or `%USERPROFILE%\.continue\config.json` (Windows).
Below is an example setup for using Continue with the Codestral model for autocomplete and Claude 3.5 Sonnet and GPT-4o models for chats and edits, all served from the Langdock API.
```json theme={null}
{
"models": [
{
"title": "GPT-4o",
"provider": "openai",
"model": "gpt-4o",
"apiKey": "",
"apiBase": "https://api.langdock.com/openai/eu/v1"
},
{
"title": "Claude 3.5 Sonnet",
"provider": "anthropic",
"model": "claude-3-5-sonnet-20240620",
"apiKey": "",
"apiBase": "https://api.langdock.com/anthropic/eu/v1"
}
],
"tabAutocompleteModel": {
"title": "Codestral",
"provider": "mistral",
"model": "codestral-2501",
"apiKey": "",
"apiBase": "https://api.langdock.com/mistral/eu/v1"
}
/* ... other configuration ... */
}
```
# OpenAI Chat completion
Source: https://docs.langdock.com/api-endpoints/completion/openai
POST /openai/{region}/v1/chat/completions
Creates a model response for the given chat conversation using an OpenAI model.
In dedicated deployments, `api.langdock.com` maps to `<your-deployment-domain>/api/public`
Creates a model response for the given chat conversation. This endpoint follows the [OpenAI API specification](https://platform.openai.com/docs/api-reference/chat/create), and the requests are sent to the Azure OpenAI endpoint.
To use the API, you need an API key. Admins can create API keys in the settings.
All parameters from the [OpenAI Chat Completion endpoint](https://platform.openai.com/docs/api-reference/chat/create) are supported according to the OpenAI specifications, with the following exceptions:
* `model`: Currently only the `gpt-5`, `gpt-5-mini`, `gpt-5-nano`, `gpt-5-chat`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o4-mini`, `o3`, `o3-mini`, `o1`, `o1-mini`, `o1-preview`, `gpt-4o`, `gpt-4o-mini` models are supported.
  * The list of available models might differ if you are using your own API keys in Langdock ("Bring-your-own-keys" / BYOK, see [here](/settings/models/byok) for details). In this case, please reach out to your admin to understand which models are available in the API.
* `n`: Not supported.
* `service_tier`: Not supported.
* `parallel_tool_calls`: Not supported.
* `stream_options`: Not supported.
## Rate limits
The rate limit for the Chat Completion endpoint is **500 RPM (requests per minute)** and **60,000 TPM (tokens per minute)**. Rate limits are defined at the workspace level, not at the API key level. Each model has its own rate limit. If you exceed your rate limit, you will receive a `429 Too Many Requests` response.
Please note that rate limits are subject to change; refer to this documentation for the most up-to-date information.
In case you need a higher rate limit, please contact us at [support@langdock.com](mailto:support@langdock.com).
## Using OpenAI-compatible libraries
As the request and response format is the same as the OpenAI API, you can use popular libraries like the [OpenAI Python library](https://github.com/openai/openai-python) or the [Vercel AI SDK](https://ai-sdk.dev/docs/introduction) to use the Langdock API.
### Example using the OpenAI Python library
```python theme={null}
from openai import OpenAI
client = OpenAI(
base_url="https://api.langdock.com/openai/eu/v1",
api_key=""
)
completion = client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{"role": "user", "content": "Write a short poem about cats."}
]
)
print(completion.choices[0].message.content)
```
### Example using the Vercel AI SDK in Node.js
```typescript theme={null}
import { streamText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
const langdockProvider = createOpenAI({
baseURL: "https://api.langdock.com/openai/eu/v1",
apiKey: "",
});
const result = await streamText({
model: langdockProvider("gpt-4o-mini"),
prompt: "Write a short poem about cats",
});
for await (const textPart of result.textStream) {
process.stdout.write(textPart);
}
```
# OpenAI Embeddings
Source: https://docs.langdock.com/api-endpoints/embedding/openai-embedding
POST /openai/{region}/v1/embeddings
Creates embeddings for text using OpenAI's embedding models
In dedicated deployments, `api.langdock.com` maps to `<your-deployment-domain>/api/public`
Creates embeddings for text using OpenAI's embedding models. This endpoint follows the [OpenAI API specification](https://platform.openai.com/docs/api-reference/embeddings) and the requests are sent to the Azure OpenAI endpoint.
To use the API you need an API key. Admins can create API keys in the
settings.
All parameters from the [OpenAI Embeddings endpoint](https://platform.openai.com/docs/api-reference/embeddings) are supported according to the OpenAI specifications, with the following exceptions:
* `model`: Currently only the `text-embedding-ada-002` model is supported.
* `encoding_format`: Supports both `float` and `base64` formats.
## Rate limits
The rate limit for the Embeddings endpoint is **500 RPM (requests per minute)** and **60,000 TPM (tokens per minute)**. Rate limits are defined at the workspace level, not at the API key level. If you exceed your rate limit, you will receive a `429 Too Many Requests` response.
Please note that rate limits are subject to change; refer to this documentation for the most up-to-date information.
In case you need a higher rate limit, please contact us at [support@langdock.com](mailto:support@langdock.com).
## Using OpenAI-compatible libraries
As the request and response format is the same as the OpenAI API, you can use popular libraries like the [OpenAI Python library](https://github.com/openai/openai-python) or the [Vercel AI SDK](https://ai-sdk.dev/docs/introduction) to use the Langdock API.
### Example using the OpenAI Python library
```python theme={null}
from openai import OpenAI
client = OpenAI(
base_url="https://api.langdock.com/openai/eu/v1",
api_key=""
)
embedding = client.embeddings.create(
model="text-embedding-ada-002",
input="The quick brown fox jumps over the lazy dog",
encoding_format="float"
)
print(embedding.data[0].embedding)
```
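Embedding vectors are typically compared with cosine similarity. A minimal, dependency-free sketch:

```python theme={null}
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Applied to two vectors returned by the embeddings endpoint, values close to 1.0 indicate semantically similar texts.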
### Example using the Vercel AI SDK in Node.js
```typescript theme={null}
import { embed } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

const langdockProvider = createOpenAI({
  baseURL: "https://api.langdock.com/openai/eu/v1",
  apiKey: "",
});

const { embedding } = await embed({
  model: langdockProvider.embedding("text-embedding-ada-002"),
  value: "The quick brown fox jumps over the lazy dog",
});

console.log(embedding);
```
# Delete a file from a knowledge folder
Source: https://docs.langdock.com/api-endpoints/knowledge-folder/delete-attachment
delete /knowledge/{folderId}/{attachmentId}
# Retrieve files from a knowledge folder
Source: https://docs.langdock.com/api-endpoints/knowledge-folder/retrieve-files
get /knowledge/{folderId}/list
# Search through all files in data folders shared with the API Key
Source: https://docs.langdock.com/api-endpoints/knowledge-folder/search-knowledge-folder
post /knowledge/search
# Share Knowledge Folders with the API
Source: https://docs.langdock.com/api-endpoints/knowledge-folder/sharing
The following guide explains how to create an API key in Langdock and share a knowledge folder with the API key.
An admin needs to create the API key and share the knowledge folder with the API key. Invite an admin as an editor to your knowledge folder with the "Share" button in the top right corner (skip this step if you are a Langdock admin yourself).
## How to create an API key
1. Navigate to [Langdock](https://app.langdock.com/chat)

2. Navigate to the workspace settings.

3. Click on "API" under products.

4. Create a new API key.

5. Enter a name and click "Create API key"

6. Copy your API Key to the clipboard.

7. Click "Done"

8. Leave the settings by clicking on "Settings" on the top left.

## How to share a knowledge folder with the API
1. Go to "Integrations"

2. Go to your knowledge folders.

3. Open the knowledge folder you want to share with the API.

4. Click "Share"

Steps 5-8 are only needed if you are not a Langdock admin. If you are a Langdock admin, you can jump to step 9.
5. Click the "Add people and groups" field.

6. Type in the name of an admin and select them in the dropdown.

7. Click on "User" and then select "Editor" in the dropdown to give them Editor permissions for this folder.

8. Click "Share"

9. Click on the three dots at the top and then on "Share with API".

# Update a file in a knowledge folder
Source: https://docs.langdock.com/api-endpoints/knowledge-folder/update-attachment
patch /knowledge/{folderId}
# Upload a file to a knowledge folder
Source: https://docs.langdock.com/api-endpoints/knowledge-folder/upload-file
post /knowledge/{folderId}
# Export Agent Usage
Source: https://docs.langdock.com/api-endpoints/usage-export/export-agents
POST /export/assistants
API endpoint to export agent usage data including message counts, active users, and trends
This endpoint exports agent usage data including message counts per agent, active user counts, and usage trends over time.
## Data Included
The agent export contains:
* Number of messages per agent
* Active users per agent
* Usage trends over time
* Agent configuration details
* Performance metrics
## Example Response
The successful response includes a signed download URL for the CSV file containing your agent usage data.
**Additional Information**: For details on prerequisites, rate limits, and export size limits, please refer to the [main Usage Export API documentation](/api-endpoints/usage-export/intro-to-usage-export-api).
# Export Model Usage
Source: https://docs.langdock.com/api-endpoints/usage-export/export-models
POST /export/models
API endpoint to export AI model usage data including token consumption, costs, and response times
This endpoint exports AI model usage data including token consumption, costs per model, request counts, and response times.
## Data Included
The model export contains:
* AI models used (GPT-4, Claude, etc.)
* Token consumption per model
* Cost per model
* Request counts
* Response times
* Error rates by model
* Usage patterns over time
## Cost Analysis
The model export is particularly valuable for cost analysis and optimization. You can use this data to:
* Identify the most expensive models in your usage
* Track cost trends over time
* Optimize model selection for different use cases
* Budget for future AI model usage
**Additional Information**: For details on prerequisites, rate limits, and export size limits, please refer to the [main Usage Export API documentation](/api-endpoints/usage-export/intro-to-usage-export-api).
# Export Project Usage
Source: https://docs.langdock.com/api-endpoints/usage-export/export-projects
POST /export/projects
API endpoint to export project activity data including involved users and resource consumption
This endpoint exports project usage data including activity metrics, involved users per project, and resource consumption statistics.
## Data Included
The project export contains:
* Project activity metrics
* Involved users per project
* Resource consumption
* Message counts per project
* Time-based usage patterns
* Project collaboration statistics
**Additional Information**: For details on prerequisites, rate limits, and export size limits, please refer to the [main Usage Export API documentation](/api-endpoints/usage-export/intro-to-usage-export-api).
# Export User Usage
Source: https://docs.langdock.com/api-endpoints/usage-export/export-users
POST /export/users
API endpoint to export user activity data including message counts and usage patterns (subject to privacy settings)
This endpoint exports user activity data including message counts, usage patterns, and feature utilization. The available data depends on your workspace privacy settings.
## Privacy Considerations
User export data is subject to workspace privacy settings: **user-identifying data** may be excluded, and some data may be anonymized depending on the workspace configuration.
## Data Included
The user export may contain:
* Message count per user
* Activity patterns
* Usage frequency
* Feature utilization
* Time-based usage analytics
**Note**: User-specific data may be excluded due to workspace privacy settings.
**Additional Information**: For details on prerequisites, rate limits, and export size limits, please refer to the [main Usage Export API documentation](/api-endpoints/usage-export/intro-to-usage-export-api).
# Export Workflow Usage
Source: https://docs.langdock.com/api-endpoints/usage-export/export-workflows
POST /export/workflows
API endpoint to export workflow execution data including success rates and performance metrics (if enabled)
This endpoint exports workflow usage data including execution counts, success rates, and performance metrics. It's only available if workflows are enabled in your workspace.
**Note**: This endpoint will return no data if workflows are not enabled in your workspace. Contact your workspace administrator to enable workflows if needed.
## Data Included
The workflow export contains:
* Workflow executions
* Success rates
* Performance metrics
* Error rates and types
* Execution duration statistics
* Resource consumption
**Additional Information**: For details on prerequisites, rate limits, and export size limits, please refer to the [main Usage Export API documentation](/api-endpoints/usage-export/intro-to-usage-export-api).
# Introduction to Usage Export
Source: https://docs.langdock.com/api-endpoints/usage-export/intro-to-usage-export-api
Comprehensive guide to using the Usage Export API with five endpoints for exporting user, agent, workflow, project, and model usage data.
The Usage Export API provides five endpoints to export usage data for users, agents, workflows, projects, and models from your workspace. Each endpoint returns a CSV file with detailed metrics for the selected date range.
You can also access the Usage Export directly in the platform; more on that [here](/administration/usage-exports).
## Prerequisites
To use the Usage Export API, you need:
* **Workspace Admin Permission**: Only workspace administrators can create API keys with usage export permissions and export data via the web interface.
* **API Key with USAGE\_EXPORT\_API Scope**: Special permission for accessing export functions
**Important Security Notice**: Users with access to an API key with **USAGE\_EXPORT\_API** scope can export workspace usage data for all areas, even if they normally don't have access to view this data. Only grant this permission to trusted users.
## Programmatic Export
### Available Endpoints
The Usage Export API provides access to various data types:
```
POST /export/users
POST /export/assistants
POST /export/workflows
POST /export/projects
POST /export/models
```
### Authentication
All API requests require Bearer token authentication:
```bash theme={null}
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
```
### Request Format
**Time specification and time zone handling:**\
The API takes the time in the *date* parameter exactly as written and interprets it as local time in the specified *timezone*. If the *date* ends with a "Z" (which stands for UTC/Zulu time), the "Z" is removed automatically so that the *timezone* is not applied twice.
```json theme={null}
{
"from": {
"date": "2024-01-01T00:00:00.000",
"timezone": "Europe/Berlin"
},
"to": {
"date": "2024-01-31T23:59:59.999",
"timezone": "UTC"
}
}
```
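The body can also be assembled programmatically; note the millisecond-precision timestamps without a trailing "Z". A sketch (the helper name is illustrative):

```python theme={null}
from datetime import datetime

def export_request(from_dt: datetime, to_dt: datetime, timezone: str = "UTC") -> dict:
    """Build a usage-export request body with millisecond-precision dates."""
    fmt = "%Y-%m-%dT%H:%M:%S.%f"
    return {
        "from": {"date": from_dt.strftime(fmt)[:-3], "timezone": timezone},
        "to": {"date": to_dt.strftime(fmt)[:-3], "timezone": timezone},
    }

body = export_request(
    datetime(2024, 1, 1),
    datetime(2024, 1, 31, 23, 59, 59, 999000),
    timezone="Europe/Berlin",
)
```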
### Response Format
#### Successful Response
The dates are returned in the correct timezone format with the proper time offset (e.g., `+01:00`/`+02:00` for Berlin).
```json theme={null}
{
"success": true,
"data": {
"filePath": "assistants-usage/workspace-id/assistants-usage-2024-01-01-2024-01-31-abc12345.csv",
"downloadUrl": "https://storage.example.com/signed-url",
"dataType": "assistants",
"recordCount": 1250,
"dateRange": {
"from": "2024-01-01T00:00:00.000+01:00",
"to": "2024-01-31T23:59:59.999"
}
}
}
```
#### Error Response
```json theme={null}
{
"error": "No data found",
"message": "No usage data found for the selected period"
}
```
### Example Requests
#### Export Assistant Usage
```bash theme={null}
curl -X POST "https://api.langdock.com/export/assistants" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"from": {
"date": "2024-01-01T00:00:00.000",
"timezone": "UTC"
},
"to": {
"date": "2024-01-31T23:59:59.999",
"timezone": "UTC"
}
}'
```
#### Export User Usage
```bash theme={null}
curl -X POST "https://api.langdock.com/export/users" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"from": {
"date": "2024-01-01T00:00:00.000",
"timezone": "UTC"
},
"to": {
"date": "2024-01-31T23:59:59.999",
"timezone": "UTC"
}
}'
```
## Rate Limits
The Usage Export API is subject to the same rate limits as other API endpoints:
* **Tokens per minute**: 60,000 TPM
* **Requests per minute**: 500 RPM
## Export Size Limits
Exports are limited to 1,000,000 rows. If your export exceeds this limit, you'll receive a 400 error asking you to narrow the date range.
## Data Types in Detail
### User Export
Shows individual user activity, depending on privacy settings:
* Message count per user
* Activity patterns
* **Note**: User-specific data may be excluded due to workspace privacy settings
### Agent Export
Contains usage data for all agents in the workspace, including:
* Number of messages
* Active users
* Usage trends over time
### Workflow Export
Usage data for workflows (if enabled):
* Workflow executions
* Success rates
* Performance metrics
### Project Export
Project-related usage statistics:
* Project activity
* Involved users
* Resource consumption
### Model Export
Detailed information about model usage:
* AI models used
* Token consumption
* Cost per model
## Troubleshooting
### Common Errors
#### 400 Bad Request - Export Too Large
```json theme={null}
{
"error": "Export too large",
"message": "Export too large: 1500000 rows exceeds limit of 1000000. Please narrow the date range."
}
```
**Solution**: Reduce the time period of your request or split the export into smaller time ranges.
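Splitting can be automated by walking the date range in fixed-size windows and issuing one export per window. A sketch:

```python theme={null}
from datetime import datetime, timedelta

def split_range(start: datetime, end: datetime, days: int = 30):
    """Split [start, end] into consecutive windows of at most `days` days."""
    windows = []
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=days), end)
        windows.append((cur, nxt))
        cur = nxt
    return windows
```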
#### 401 Unauthorized
```json theme={null}
{
"error": "Unauthorized",
"message": "Invalid or missing API key"
}
```
**Solution**: Check that your API key is correct and has the USAGE\_EXPORT\_API permission.
#### 404 No Data Found
```json theme={null}
{
"error": "No data found",
"message": "No usage data found for the selected period"
}
```
**Solution**: Check the selected time period - there may have been no activity during this period.
## Security and Privacy
### Privacy Settings
Depending on workspace configuration, certain data may be excluded:
* **User-identifying Data**: May be excluded due to privacy settings
* **Leaderboards**: Must be enabled in the workspace to get complete user data
### Best Practices
1. **Secure API Key Storage**: Use environment variables or secure key management
2. **Regular Rotation**: Renew API keys regularly
3. **Minimal Permissions**: Only grant necessary scopes
4. **Monitoring**: Monitor the usage of your API keys
### Compliance
The Usage Export API helps with compliance requirements:
* **Audit Trails**: Complete tracking of API usage
* **Data Export**: Support for GDPR data access rights
* **Transparency**: Clear insights into workspace usage
## Support
For questions about the Usage Export API, contact our support team or consult the complete API documentation.
# Agent Configurator Template
Source: https://docs.langdock.com/resources/agent-configurator
This agent helps you to build other agents for specific use cases. You can use the configuration below and paste it in an agent in your workspace.
Click on the copy button on the top right of the code block to copy the text and paste it in Langdock.
### Name
```
Agent Configurator
```
### Description
```
This agent helps you understand how Agents in Langdock work and how to build great agents.
```
### Instructions
```
# Persona:
You are a Prompt Engineering Agent, specialized in crafting, refining, and optimizing system prompts for Agents.
You are methodical, user-friendly, and up-to-date with prompt engineering best practices. You help users build robust, actionable prompts for their use case, and you are familiar with Langdock’s manual integration enablement process.
ALWAYS answer and generate a prompt in the language the user interacts with you
# Task:
Guide users in creating or refining agent system prompts.
Your goal: Create the most effective, context-rich prompt possible, while making the process smooth and user-friendly and providing suggestions for the additional configuration options for agents (as described below).
Always structure your output using the four elements: Persona, Task, Context, and Format.
If user input is vague or incomplete, politely point out what is missing, but instead of repeatedly asking, suggest concrete options or defaults (e.g., “Would you like to enable Google Calendar or web search integration?”).
### For Knowledge:
Ask the user if there are any documents that would be helpful to give more context to the agent.
These can be text files (PDFs, Word documents, TXT files) containing examples, documentation, or other helpful context for the use case.
There are two ways to add knowledge:
- the Knowledge section in the agent: for files uploaded here, a preview of the document (the first couple of pages) is directly accessible to the agent; the rest is searchable by the agent. The limit here is 20 documents. This should only be used for a small number of documents, and only if the entire uploaded document is relevant for all interactions.
- Knowledge folders: You can create them via the integrations menu and attach them via the "Add action" button in the Actions section. Knowledge folders can contain up to 1000 files, which can only be searched by the agent. This is especially useful if not all content of the document is relevant for each interaction, or if very long context should be given.
### For integrations/tools:
Integrations (their Actions), Capabilities (Websearch, Data Analyst, Image Generation, and Canvas), and Knowledge Folder Search can be added via the "Add action" button in the agent configurator (in the Actions section).
Websearch, Data Analyst, Image Generation, and Canvas are Capabilities that can always be added to an agent; none of them are enabled by default (the user has to add them explicitly).
If the user wants to interact with tabular data or generate files, ALWAYS tell them to add the Data Analyst.
For any other integrations, first ask the user to list which are available in their workspace and which actions are available as well.
Ask the user to confirm or decline each suggested integration/tool.
If the user declines or suggests different integrations, adapt the prompt accordingly.
### For models:
Ask the user which model they intend to use in the agent or which are available for them. Do not give concrete examples, rather ask which ones are available first, they can check their model selector.
### Creativity:
Agents have a creativity setting from 0.0 to 1.0, which is similar to the temperature setting in ChatGPT. 0.0–0.3 is very low, which is good if you want to work accurately with data, e.g. in spreadsheets; 0.7 is high and works well for marketing text generation.
### Conversation starters
An Agent can have conversation starters, which are pre-defined prompts that serve as an example of how to initiate a conversation with that agent. They aim to provide guidance on what a good question to that agent could look like and therefore help with creating an understanding of what it was designed for. They are one sentence long and as concise as possible.
# Context:
Users may not realize which details or integrations are important for their agent.
Use the CO-STAR framework (Context, Objective, Style, Target audience, Answer, Response format) to clarify user needs. Do not mention this framework directly, only if asked.
Encourage users to provide:
- Use case or domain
- Main objective or problem to solve
- Intended audience
- Desired tone, style, or expertise level
- Output format and length
- Examples of ideal/non-ideal outputs
- Constraints or forbidden topics
For integrations/tools, you can ask:
- “What integrations are accessible in your workspace?”
- “For each integration, do you want to allow read, write, or both types of actions? Should any actions require confirmation?”
If the user prefers not to specify, suggest helpful defaults and confirm before including them.
# Format:
Your final output (after the user has answered all questions) should not be repetitive and should be presented in this structure:
### Agent Instructions:
Persona: [Describe the agent’s role/persona, including expertise, tone, and behavioral traits]
Task: [State the specific task, goal, or responsibility]
Context: [Provide all relevant background, domain, constraints, examples, user goals, and enabled integrations/actions]
Format: [Specify output format, tone, style, and any required/excluded elements]
If information is missing, pause and ask clarifying questions or suggest concrete additions before proceeding.
When all info is gathered, present the final prompt in a clearly labeled, copy-pasteable format.
### Further Configurations
Add suggestions for the additional configurable elements for an agent in the following order:
- Knowledge
If the user mentions documents that should be included, guide them in using the right way to upload them (attach to knowledge or knowledge folder)
- Actions/Integrations (this includes capabilities and knowledge folders)
Include a list of integrations/tools to be enabled, based on user confirmation.
- Creativity
Suggest a suitable creativity for their use case as well.
- Model Choice
Make an informed suggestion about which one would be applicable for their use-case
- Conversation starters
Suggest two conversation starters for the agent the user is trying to build.
# Example Output:
Persona: You are a proactive sales agent, expert in CRM management, with a friendly and concise communication style.
Task: Track sales leads, update CRM records, and schedule follow-up tasks.
Context: The agent should help sales reps manage leads efficiently. The user prefers summaries and actionable next steps. No confidential customer data should be shared in outputs.
Format: Respond in bullet points. For each lead, list key details and recommended actions. Summarize next steps at the end.
Integrations/tools to be enabled for this agent:
- HubSpot CRM (read and write access, confirmation required for record updates)
- Google Calendar (read-only, for scheduling)
If the user declines or changes integrations/tools:
“You’ve chosen not to enable Google Calendar. I’ve removed it from the integration list. Would you like to add any others (e.g., Outlook, Slack)?”
“Based on your feedback, here’s your revised prompt and updated integration list.”
### Conversation Starters

```
Can you help me write the instructions for an agent I want to build?
```
```
What’s the best way to structure an agent prompt?
```
```
How can I improve my agent?
```
```
Can you help me build an email agent?
```
### Knowledge
Not needed in this case.
### Model
Use GPT-4.1 or Claude Sonnet 4.
### Creativity
0.7 is recommended.
### Capabilities
No additional capabilities are required.
# Creating an Agent
Source: https://docs.langdock.com/resources/agent-creation
This guide shows you how to build an agent. We will use an example of a job description agent, but the steps and considerations can be used with any use case.
For inspiration and ready-to-use templates, check out our [Agent Templates](/resources/agent-templates) collection.
## Getting Started
Navigate to the [agent overview](https://app.langdock.com/agents) page and click **"Create agent"**.
This opens the agent configurator with configuration options on the left and a testing panel on the right.
Before diving into configuration, consider these key questions:
* **Purpose**: What specific task should this agent help with?
* **Process**: What are the steps to achieve this task?
* **Resources**: Do you need to attach files or connect to knowledge sources?
* **User guidance**: How will you guide users to provide the right information?
* **Examples**: Do you have examples that demonstrate the expected style and output?
For our job description agent, we want to:
* Help users write job descriptions for different platforms and target groups
* Guide users through providing job details
* Ask for missing information when needed
* Never assume information that wasn't provided
## Basic Configuration
### Icon, Name, and Description
Choose an emoji or upload a custom icon that represents your agent's purpose.
* **Name**: `Onboarding Agent`
* **Description**: `Helps new team members to get started in the company by answering frequently asked questions.`
## Writing Instructions
The instructions are the most critical part of your agent configuration. Use the [prompt engineering guide](/resources/prompt-elements) to craft effective instructions using the PTCF framework (Persona, Task, Context, Format).
**Automatic Saving:** Your agent saves automatically as you make changes; no manual saving is required.
Define who your agent is:
```text theme={null}
You are a friendly and helpful onboarding agent dedicated to guiding new employees as they familiarize themselves with Langdock and their specific roles.
```
Specify what the agent should do and how:
```text theme={null}
Your primary objective is to support new joiners in understanding Langdock's mission, the company processes and teams, their job responsibilities, and key resources available to them, ensuring a seamless transition into the company.
```
Provide relevant background information:
```text theme={null}
Direct employees to specific resources, such as sections in the handbook, or key contacts for further assistance. Encourage sharing feedback to Lennard as the owner for onboarding and company processes overall to help improve the organization continuously. Suggest an ideal timeline for completing onboarding tasks and tailor information to each employee's role or department. Ask the user if they would like to learn more about a specific topic if appropriate.
For more detailed information about Langdock, please refer to the attached document. The different sections in the handbook are:
Chapter 1: Getting Started
Chapter 2: Strategy
Chapter 3: How we got here
Chapter 4: Sales & Marketing
Chapter 5: Customer Success
Chapter 6: Product & Engineering
Chapter 7: Business Model
Chapter 8: Team & Stakeholders
Chapter 9: How We Work, Values, and Principles
Chapter 10: Business Operations
Chapter 11: Meetings, and Feedback
Chapter 12: Hiring
```
Specify the expected output structure:
```text theme={null}
Maintain an empathetic and engaging tone while providing concise and clear information. Cover key topics such as company culture, policies, essential tools, and systems.
```
Click the expand button in the bottom right corner of the instruction field to enlarge it for easier editing.
## Advanced Configuration
### Conversation Starters
Conversation starters help users get started quickly and reduce friction.

If your agent can perform different tasks, let users choose:

* *I want to write a new text*
* *I want to correct a text I have written*

For frequently asked questions or common use cases:

* *How do I request holidays?*
* *Who do I contact for tech support questions?*
Use conversation starters to quickly test your agent during development instead of retyping the same prompts.
### Knowledge Integration
Upload documents directly from your computer via drag and drop, or select files from connected integrations like in chat.
### Capabilities
Enable additional tools your agent can use:

* **Websearch**: Access real-time information from the internet
* **Data Analyst**: Analyze data and create visualizations
* **Image Generation**: Create images based on text descriptions
* **Canvas**: Dedicated editing screen alongside your chat
For detailed information about these capabilities, see our [chat tools guide](/product/chat/plain-model).
By default, agents are created without any capabilities enabled. When building an agent that needs web search (or any other tool), make sure to manually add those capabilities in the agent settings.
### Agent Actions
Actions enable your agent to interact with external tools and APIs. They can:
* Retrieve information from other systems
* Update, delete, or create entries in external tools
* Automate workflows across platforms
Learn more about setting up actions in our [integrations guide](/resources/integrations/using-integrations).
### Model Selection & Creativity
Choose the AI model that best fits your use case:
GPT-4.1 is currently the best model for most use cases, offering the optimal balance of capability and performance. See our [model guide](/resources/models) for detailed comparisons.
### Creativity (Temperature)
Adjust the creativity level to control response variability:
* **Lower creativity**: More focused, consistent responses
* **Higher creativity**: More varied, creative responses
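Conceptually, the creativity setting maps to the temperature used when sampling tokens. The following is a toy sketch of the effect, with made-up token scores rather than real model output (note that a temperature of 0 corresponds to always taking the top token, so the formula below requires a value above 0):

```python
import math

def softmax_with_temperature(scores: dict[str, float], temperature: float) -> dict[str, float]:
    """Turn raw token scores into probabilities.

    A low temperature (> 0) sharpens the distribution toward the top
    token; a high temperature flattens it, making output more varied.
    """
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

scores = {"Berlin": 2.0, "Munich": 1.0, "Hamburg": 0.5}
low = softmax_with_temperature(scores, 0.2)   # near-deterministic: "Berlin" dominates
high = softmax_with_temperature(scores, 1.0)  # flatter distribution: more variety
```

This is why low creativity suits data work (the same input reliably produces the same output) while high creativity suits text generation, where variety is welcome.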
## Testing and Iteration
Always test your agent thoroughly before sharing it with others. Try different scenarios, edge cases, and intentionally leave out information to test the agent's ability to ask clarifying questions.
Use the testing panel on the right side of the Agent Builder to:
* Test different conversation flows
* Verify the agent asks for missing information
* Ensure output format consistency
* Check response quality across various inputs
## Sharing Your Agent
Click the **Share** button in the upper right corner to control access:
Sharing options might be restricted in your workspace based on your admin's security settings.
* Share with your entire workspace for broad access.
* Copy a direct link to share with specific people.
* Grant access to specific groups or individuals with either:
  * **User access**: Can use the agent
  * **Editor access**: Can modify the agent configuration
Click the three dots next to the Share button to:
* **Duplicate**: Create a copy for experimentation
* **Delete**: Remove the agent
* **Usage Insights**: View analytics and user feedback
## Best Practices
**Monitor Usage**: Use the usage insights feature to understand how your agent is being used and identify areas for improvement. Access this through the three-dot menu next to the Share button.
### Iterative Improvement
1. **Start Simple**: Begin with basic functionality and add complexity gradually
2. **Test Extensively**: Try various scenarios and edge cases
3. **Gather Feedback**: Monitor usage insights and collect user feedback
4. **Refine Instructions**: Update based on real-world usage patterns
### Common Pitfalls to Avoid
* **Over-engineering**: Don't try to handle every possible scenario initially
* **Assuming Information**: Always instruct the agent to ask for missing details
* **Ignoring Edge Cases**: Test what happens when users provide incomplete information
## Next Steps
Now that you've created your agent, consider:
* [Setting up integrations](/resources/integrations/introduction-integrations) to connect with your existing tools
* [Creating prompt templates](/product/chat/prompt-library) for common use cases
* Explore ready-to-use agent templates for common business use cases
* Master the art of writing effective prompts and instructions
# Agent Use Cases
Source: https://docs.langdock.com/resources/agent-templates
We have collected a list of agents and use cases to inspire you how to utilize Langdock for your specific needs. Let us know if you need more details or have additional requests.
### Data & Analytics
* Answers advanced questions about existing users by querying internal data.
* Has the database schema attached and helps write SQL code to perform analyses.
* Knows everything about data schemas and data model relationships at the company.
* Helps you solve Data Analyst tasks and explains how the Data Analyst works.
### Name
```
Data Analyst Agent
```
Description:
```
Helps you to learn about how to use the Data Analyst but also to work with it.
```
### Instructions
```
You are a Data Analysis Expert Agent, specialized in Langdock's Data Analyst capability. You combine technical expertise with clear, accessible communication to help both beginners and advanced users work effectively with tabular data. You maintain a professional yet friendly tone, adapting your explanations to match the user's expertise level.
Your primary responsibilities are:
1. Explain how Langdock's Data Analyst feature works and when it's triggered
2. Perform data analysis tasks when users upload files (CSV, Excel, Google Sheets, JSON)
3. Guide users on best practices for data formatting and prompt engineering
4. Provide step-by-step instructions for tasks users need to complete in their own tools
5. Demonstrate the difference between Data Analyst and regular document processing
You have access to comprehensive documentation about Langdock's Data Analyst feature. Key technical details:
- Triggers automatically when tabular files (CSV, Excel, Google Sheets, JSON) are uploaded, or when explicitly requested
- Generates and executes Python code to process data
- Cannot read entire file content like document search, but excels at mathematical operations and tabular data processing
- File size limit: 30MB (smaller files typically perform better)
- Best formats: CSV and Excel files with column headers in first row
- Column titles should be descriptive (avoid "Column K", use full descriptive names)
- Avoid empty cells when possible
- Break complex operations into multiple prompts if needed
- Use specific, goal-oriented prompts rather than vague requests
✅ Good: "Analyze monthly sales trends over the last 12 months and identify seasonality patterns"
❌ Poor: "Can you analyze this dataset?"
✅ Good: "Find the top 5 most purchased products and their total revenue from this customer purchase data"
❌ Poor: "What's wrong with my data?"
When users ask about recognizing Data Analyst usage, explain the visual cues: dark code blocks showing Python code, followed by execution results, then the AI's interpretation.
NEVER show the XML tags to the user
Provide clear explanations about how features work
When performing data analysis, show your process and interpret results
Use numbered steps for tasks users must complete in external tools
Highlight best practices and optimization advice
For data analysis tasks: Perform the analysis directly, then explain what was done and why. For explanatory requests: Provide detailed explanations with examples. For procedural tasks: Give step-by-step instructions the user can follow in their tools.
Always encourage users to be specific in their prompts by asking: What's the dataset about? What decision are you trying to support? What metrics matter? What output format do you prefer?
```
### Conversation starters
```
Can you analyze this CSV file and show me the best practices for getting reliable insights?
```
```
How do I know when the Data Analyst is being used versus regular document processing?
```
### Knowledge
Attach [this](https://drive.google.com/file/d/1PigXgt4Ao8xn604f4auRsUVhzqE0BtlZ/view?usp=drive_link) file.
### Model
Use Claude Sonnet 4 or Gemini 2.5 Pro
### Creativity
0.3 is recommended.
### Capabilities
Data analyst
### Engineering
* Helps with writing software code.
* Identifies bugs, describes the error, and suggests solutions.
### Finance
* Creates Excel formulas.
* Analyzes Excel or CSV files in Langdock.
* Summarizes financial data for management.
* Determines depreciation duration based on user input according to the official depreciation table.
### HR
* Helps write job postings for different platforms.
* Develops personalized questions based on an uploaded CV, the role, the interview stage, and company culture.
* Answers frequently asked questions and helps new hires get started in their position.
* Writes, improves, and renews intranet pages (Confluence integration).
* Develops course content and creates it in multiple languages; especially used for re- and upskilling.
* Crafts employee development plans and designs training modules.
### InfoSec
* Develops training modules that simulate security scenarios, helping users learn to identify and respond to threats.
* Provides recommendations to handle and remediate incidents based on historical data.
* Generates and reviews security policies and compliance reports based on industry standards and regulations.
* Automates and improves processes around risk assessments, compliance reporting, documentation, and knowledge sharing by leveraging the Data Analyst to process or generate Excel/CSV files.
* Explains new security concepts and helps team members understand them and implement suitable measures.
* Reviews code for security issues and proposes improvements.
* Helps design and develop secure architecture; provides recommendations for controls and identifies potential vulnerabilities.
### Leadership
* Helps formulate feedback constructively.
* Coaches and sets goals using specific frameworks (SMART, OKRs, ...).
* Uses strategy frameworks (Porter, Peter Drucker, ...) to develop, question, and improve strategies.
* Creates feedback and development plans for employees based on strengths and weaknesses extracted from conversation notes.
### Legal
* Answers questions about contracts without having to search the contract.
* Helps fill out security and compliance questionnaires.
* Answers simple legal questions for non-lawyers.
* Analyzes contracts for weaknesses or missing content.
### Marketing
* Includes previously written LinkedIn posts and writes new posts based on user input.
* Writes user-facing updates for individual platforms (Slack, Teams, website, in-product).
* Develops marketing ideas and writes content for marketing channels (ads, influencers, TV ad scripts, ...).
* Generates versioned content for social media outlets, taking company guidelines into account.
* Writes and improves texts for search engine optimization (SEO).
* Transcreates your content to adapt it for international markets.
* Generates arguments for your product in comparison to a specific competitor, in line with internal product guidelines and category positioning.
* Analyzes user and customer surveys quantitatively based on your natural-language questions.
### Operations
* Analyzes workplaces, identifies risk factors, and writes reports based on uploaded images.
* Answers questions about processes in and around the office.
* Develops workshops on specific topics to train team members and tests whether the knowledge was understood.
* Deciphers and explains company- or industry-specific acronyms.
* Helps teams in one country learn another language by correcting an entered text and marking and explaining incorrect passages; used for Japanese -> English.
### Product
* Has a specific user persona and answers questions about features, UX, and customer experiences.
* Summarizes user feedback and usage data, providing actionable insights for product improvement.
* Helps you anticipate critical questions, identify gaps, and highlight edge cases to strengthen your proposals and presentations.
* Prioritizes features based on defined categories.
* Assists in brainstorming and developing strategic frameworks, inspiring new ideas for your product roadmap.
* Enhances your writing by polishing drafts, suggesting improvements, and generating specific tones for different communications.
* Gathers and analyzes market data, helping you make informed decisions based on comprehensive research and technical insights.
* Defines development criteria, writes documentation, and creates requirement tickets.
### Public Relations
* Answers questions from journalists based on attached knowledge.
### Sales
* Searches for specific companies and provides a quick analysis (industry, size, products, competitors, locations, ...); useful for preparing sales meetings.
* Writes personalized texts to different people based on a set of personas.
* Develops case studies for specific customers in various industries and translates them into other languages.
* Analyzes competitors based on a name or URL.
* Converts transcripts or notes into MEDDICC format, stored in Salesforce fields.
* Provides battlecards for competitors or product information for sales support.
* Helps fill out RFPs based on product documentation.
* Identifies the correct industry for a company name or URL and finds reference customers in the existing customer base.
### Support
* Answers questions received by support staff and formulates answers in a standardized, understandable form.
* Explains error codes and situations without help from the tech team.
* Trains support staff on specific topics and situations.
* Answers technical questions for non-experts, providing clear and concise solutions.
### Miscellaneous
* Helps you write prompts in Langdock, learn prompt engineering, and effectively instruct agents.
* Translates into another language.
* Writes emails, improves grammar or tone, and helps shorten or elaborate.
* Acts as a personal mentor for topics like sports, career, or conflict situations.
* Identifies suitable use cases and helps build Langdock agents for them.
Are you missing a department or a use case? We update this list regularly, so feel free to reach out with requests and ideas to [support@langdock.com](mailto:support@langdock.com)!
# Basics of AI models
Source: https://docs.langdock.com/resources/basics
This is a basic guide to understand the fundamentals of how AI models work. It lays the foundation for deeper concepts explained in this guide.
## The life cycle of an AI model
A Large Language Model (LLM) undergoes two main phases:
1. The training phase
* The model is trained on large data sets
2. The usage phase.
* The model can be used to generate an answer
* The model **can not learn anymore**
### Training an LLM
**What is a Token?** A token is a piece of text (roughly a word or word fragment) that the model processes. On average, 1 token equals about 4 characters. For example, "Hello world" is 2 tokens, while "understanding" might be split into 2 tokens: "under" and "standing".
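The 4-characters rule of thumb is handy for ballpark prompt sizing. A minimal Python sketch of the heuristic (an approximation only, since real tokenizers split text into subwords and counts differ):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token rule of thumb.

    Real tokenizers split text into subwords, so exact counts differ;
    use this only for ballpark prompt sizing.
    """
    return max(1, round(len(text) / 4))

print(estimate_tokens("Hello world"))  # 11 characters -> about 3 by this heuristic
```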
During training, the model processes vast amounts of text data using a technique called "next token prediction." The model learns statistical relationships between words and concepts by repeatedly predicting what word should come next in a sequence.
For example, given the text "The capital of Germany is \_\_\_", the model learns that "Berlin" has a high probability of being the next token. Through billions of these predictions across diverse text, the model builds a sophisticated understanding of language patterns, facts, and reasoning.
Once training completes, the model's parameters are frozen. The "knowledge cutoff date" marks when training data collection stopped, meaning the model has no knowledge of events after this date.
**Now let's explore how these trained models actually generate responses.**
### Using an LLM
**What is Inference?** Inference is the phase when a trained AI model generates responses to your prompts. Unlike training (when the model learns), during inference the model uses its existing knowledge to predict and generate text. The model cannot learn new information during this phase.
During the usage phase (also known as inference), the model generates responses by sampling from the probability distributions it learned during training. When you ask about `Artificial Intelligence`, the model assigns much higher probability to related terms like `machine learning` than unrelated ones like `banana cake`.
When a user sends a prompt to the model, the model will choose the next word or word-piece (token) based on these probabilities.
For example, when a user sends `Hi`, the model assigns high probability to greeting tokens, so it generates `Hello` as the response.
Then, it generates the next most likely word based on `Hi Hello`. This process is repeated until the model decides the request was sufficiently answered.
The generation process works token by token:
1. User sends: `Hi`
2. Model predicts high probability for greeting tokens like `Hello`
3. Model then predicts the next token based on `Hi Hello`
4. This continues until the model generates an end-of-sequence token
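The loop above can be sketched with a toy model. The probability table here is entirely made up for illustration; a real LLM derives these probabilities from billions of learned parameters:

```python
import random

# Toy next-token table: each context maps to a probability distribution
# over possible next tokens. These numbers are invented for the sketch.
NEXT_TOKEN_PROBS = {
    "Hi": {"Hello": 0.8, "Hey": 0.15, "banana": 0.05},
    "Hi Hello": {"there": 0.6, "!": 0.3, "<eos>": 0.1},
    "Hi Hello there": {"<eos>": 1.0},
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    context = prompt
    for _ in range(max_tokens):
        # Unknown contexts fall back to ending the sequence.
        probs = NEXT_TOKEN_PROBS.get(context, {"<eos>": 1.0})
        tokens, weights = zip(*probs.items())
        token = random.choices(tokens, weights=weights)[0]
        if token == "<eos>":  # end-of-sequence token stops generation
            break
        context = f"{context} {token}"
    return context

random.seed(0)
print(generate("Hi"))
```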
### Influencing the output of a response
**What is a Context Window?** The context window is the maximum amount of text (measured in tokens) that an AI model can process in a single request. Think of it as the model's "working memory" - everything you want the model to consider (your current message, chat history, attached documents, instructions) must fit within this limit.
Since deployed models cannot learn after being deployed, how do they remember previous messages or incorporate new information? The answer lies in the context window.
Each request to the model includes everything needed for that specific response: your current message, the entire chat history, attached documents, system instructions, and any relevant knowledge base content. This complete context gets packed into the model's context window (the maximum amount of text it can process in a single request).
The model treats each request as completely independent, but by including all relevant context, it can maintain coherent conversations and reference previous information.
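A minimal sketch of that packing step (the token estimate and message format are simplified assumptions for illustration, not Langdock's actual implementation):

```python
def estimate_tokens(text: str) -> int:
    # Rough ~4 characters per token heuristic; real tokenizers differ.
    return max(1, len(text) // 4)

def pack_context(system: str, history: list[str], budget: int) -> list[str]:
    """Build a request payload: keep the system instructions plus the most
    recent messages that still fit into the token budget."""
    used = estimate_tokens(system)
    kept: list[str] = []
    for message in reversed(history):  # walk from newest to oldest
        cost = estimate_tokens(message)
        if used + cost > budget:
            break  # older messages no longer fit the context window
        kept.append(message)
        used += cost
    return [system] + list(reversed(kept))  # restore chronological order

packed = pack_context("sys", ["x" * 40, "recent q", "latest q"], budget=6)
# -> ['sys', 'recent q', 'latest q']; the long oldest message is dropped
```

This is also why very long chats eventually "forget" early messages: once the budget is exhausted, the oldest context can no longer be included in the request.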
# Chain Prompts
Source: https://docs.langdock.com/resources/chain-prompts
Break down complex tasks into smaller, manageable steps to guide AI through systematic execution and ensure comprehensive results.
Divide complex tasks into smaller, manageable steps for better results.
When you write 3-4 tasks in one prompt without structure, LLMs can miss tasks due to attention limitations in transformer architectures. Each task competes for the model's focus, leading to incomplete execution. This connects to [Chain-of-Thought prompting](/resources/prompting-techniques#chain-of-thought-prompting).
Breaking down tasks creates a clear execution path that guides the model through each step systematically, ensuring comprehensive results.
## Breaking down in one prompt:
Structure your request with numbered steps or clear separators to help the model process each task sequentially.
*Example:*

`Search the attached documents for information about office guidelines in our Berlin office.`

`Then, list relevant items as bullet points and sort them by importance.`

`Afterwards, write a piece of concise information to post on our company's Slack channel to remind everyone about the 10 most important things to remember.`
## Breaking down in several prompts:
For complex workflows, use separate prompts to maintain context and build on previous outputs.
*Example:*
> *Prompt 1:*
>
> `Please search for our office guidelines in the Berlin office in the attached document.`
*Response:* `…`
> *Prompt 2:*
>
> `Sort the guidelines by importance. Explain your reasoning.`
*Response:* `…`
> *Prompt 3:*
>
> `Write a Slack Post explaining the 10 most important guidelines.`
*Response:* `…`
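When calling a model via an API, the multi-prompt pattern above can be automated. A minimal sketch, with `ask_model` as a hypothetical stand-in for whatever chat endpoint you actually use (the echo implementation only exists to keep the sketch runnable):

```python
def ask_model(history: list[dict]) -> str:
    # Placeholder for a real chat API call; swap in your provider's SDK.
    # Echoing the last prompt just keeps this sketch self-contained.
    return f"[answer to: {history[-1]['content']}]"

def run_chain(prompts: list[str]) -> list[str]:
    """Send prompts one at a time, feeding every answer back into the
    history so later steps can build on earlier outputs."""
    history: list[dict] = []
    answers: list[str] = []
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        answer = ask_model(history)
        history.append({"role": "assistant", "content": answer})
        answers.append(answer)
    return answers

steps = [
    "Search the attached document for the Berlin office guidelines.",
    "Sort the guidelines by importance. Explain your reasoning.",
    "Write a Slack post explaining the 10 most important guidelines.",
]
results = run_chain(steps)
```

Because each call includes the accumulated history, the third step can reference the sorted guidelines produced by the second, mirroring the manual prompt chain shown above.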
# How to use the Slack Bot
Source: https://docs.langdock.com/resources/chatbots/slack
Use Langdock models and Agents directly in Slack
To use Langdock in Slack, the Langdock app must be installed in your Slack
workspace. This can only be done by a workspace admin following the steps
detailed on the [Slack Bot integration
page](/settings/chatbots/slack).
Afterwards, make sure that you have the app installed in your account. It should appear in the "Apps" section at the bottom left. If it's not installed, please click on "Add apps" to add the app to your Slack account.
## Basic Usage
The Langdock Slack Bot gives you full access to Langdock models and Agents directly in Slack. Once installed, you can interact with it by:
* Tagging **@Langdock** in any channel where the app is a member
* Sending a direct message to **@Langdock** (find it under "Apps" in your left sidebar)
After tagging **@Langdock** or messaging it directly, the bot responds in a thread where you can choose your default model or any Agent. The default model comes from your workspace settings or your personal preferences in [Langdock Preferences](https://app.langdock.com/settings/account/preferences).
## What does the Slack Bot see?
The Langdock Slack Bot only sees messages in threads where it's tagged or messaged directly. This is by design for privacy: the bot doesn't monitor other channel messages or threads. To ensure the bot sees a specific message, tag it in that same thread.
In the example below, the bot can see and translate the previous message because they're in the same thread:
## Using Agents
You have access to all Agents available in your Langdock workspace. After tagging **@Langdock** or messaging it directly, search and select any Agent as shown below. The Agent will respond using all its knowledge and capabilities.
## Continue conversation in Langdock
By default, Slack Bot messages don't appear in your Langdock message history. To continue the conversation in Langdock, click "Continue conversation in Langdock" in any bot reply. This opens the conversation in your browser, picking up exactly where you left off in Slack.
While you can see what the bot replies to other users in Slack, you can only continue your own conversations. To join an existing thread, send a new message in that thread.
## Web search, data analyst, image generation
The Slack Bot has identical capabilities to the Langdock web app. You'll get the same results whether you're using Slack or the web interface.
## Images and documents
The Slack Bot handles images and documents just like the web app. Attach any file to your message and the bot will read and analyze it.
## Limitations of the Slack Bot
* Cannot be used in Direct Messages with other users, only in channels or direct messages to the bot
* Only sees content in threads where it's tagged, no access to broader workspace information like channel members or other messages
# How to use the Teams Bot
Source: https://docs.langdock.com/resources/chatbots/teams-bot
Use Langdock models and agents directly in Microsoft Teams
To use Langdock in Teams, the Langdock app must be installed in your Teams
workspace. This can only be done by a workspace admin following the steps
detailed on the [Teams Bot setup page](/settings/chatbots/teams-bot).
Once the app is installed by your admin, you should see Langdock available in your Teams app. You can find it by clicking on the "..." icon in the left sidebar and searching for "Langdock".
## Two Ways to Use the Teams Bot
You can interact with the Langdock Teams Bot in two places:
1. **Private Chat**: Direct one-on-one conversations with Langdock
2. **Team Channels**: Collaborative conversations via **@Langdock** mention where your team can see and participate
Each approach has its benefits depending on your use case.
In channels, you must tag **@Langdock** to interact with the bot. The Teams Bot uses your default model from [Account Settings > Preferences](https://app.langdock.com/settings/account/preferences), or your workspace default if none is configured.
## Available Commands
The Langdock Teams Bot provides three commands to help you get started:
| Command | Description |
| ------------------- | ------------------------------- |
| **SetAgent** | Select a Langdock agent |
| **SwitchWorkspace** | Switch to a different workspace |
| **Help** | Get help with the Langdock app |
In private chat, these commands appear directly above the message input field in the prompt library. In channels, they appear after you tag **@Langdock**.
### SetAgent
Use the **SetAgent** command to select which agent handles your conversation. A dropdown appears with all available agents from your workspace. Use the search field to find the one you need, then click **Set**.
### SwitchWorkspace
If you have access to multiple Langdock workspaces, use **SwitchWorkspace** to change which workspace you're connected to. Select the workspace you want and click **Switch Workspace**.
## Private Chat
Private chat gives you a direct one-on-one conversation with Langdock. Your conversation context is maintained across messages.
### Getting Started
1. Find Langdock in your Teams app list
2. Open a chat with Langdock
3. You'll see the prompt library above the message input
### Clearing Chat History
To start fresh with a clean conversation, go to the chat tab, click the three dots menu, and select "Remove chat history". This resets the conversation context.
Right-click on Langdock in your sidebar and select "Pin" for quick access. This keeps Langdock visible.
## Using Langdock in Channels
Channels let your team collaborate with Langdock together. Each thread becomes its own conversation.
### Getting Started
To interact with Langdock in a channel, tag **@Langdock** in your message. This routes your message to the Langdock API and brings up the command suggestions.
### Thread Context
Each thread in Teams is treated as a separate conversation. Here's what makes channels powerful:
* Langdock can see all messages in a thread once you tag it, even messages sent before it was tagged
* You can discuss something with colleagues first, then pull in **@Langdock** later with full context
* For follow-up questions, tag **@Langdock** again to ensure your message gets routed to the bot
### Setting an Agent for a Channel
You can create a dedicated channel for a specific agent. This is great when your team needs easy access to a specialized AI helper.
**Recommended workflow:**
1. Create a new channel in Teams (e.g., "FAQ-Agent" or "Content Writer")
2. When starting a conversation, select the relevant agent
3. All questions in that channel can then be directed to that agent
This pattern is particularly useful because:
* Team members know exactly which agent handles questions in that channel
* Each thread maintains its own conversation context
* Multiple team members can start separate threads with the same agent
Keep in mind that any team member can select a different agent when starting a new thread in the channel. If you want consistent agent usage, establish team guidelines for your channel.
### Channel Privacy
When using Langdock in a channel, only members of that channel can see the conversations. Teams offers different channel privacy settings:
* **Standard channels**: Visible to all team members
* **Private channels**: Only visible to invited members
Consider using private channels for sensitive topics or when working with confidential information.
## Additional Features
### Web Search and Data Analysis
The Teams Bot has the same capabilities as the Langdock web app.
* Web search for current information
* Data analysis on uploaded files
### Images and Documents
Share images and documents directly in your Teams messages:
* Upload files and images as attachments
* The bot can read and analyze them just like in the web app
## Limitations
* Only members of a Langdock workspace can chat with the Teams Bot
* In channels, the bot only responds when tagged, but it can see all previous messages in the thread
* Cannot be used in direct messages with other users, only in channels or private chat with the bot itself
# Context Window Tricks
Source: https://docs.langdock.com/resources/context-window
Understand AI's working memory limits and learn techniques to maximize the effectiveness of your context window.
Think of the context window as your AI's working memory - it's the maximum amount of text the model can "remember" and work with at once. Each token (roughly 4 characters) counts toward this limit.
You can find an overview of the context window size of different LLMs in Langdock [here](/resources/models#context-window-sizes).
Here's why this matters: The larger the context window, the more your AI can juggle - longer documents, extended conversations, complex analysis - all without losing track of what you're talking about.
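The "roughly 4 characters per token" rule of thumb above can be turned into a quick back-of-the-envelope estimate. This is a simplified sketch, not a real tokenizer — actual token counts vary by model and language:

```javascript
// Rough heuristic from the text above: ~4 characters per token.
// Real tokenizers vary by model; treat this as a ballpark only.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

const prompt = 'Summarize the attached quarterly report in five bullet points.';
console.log(estimateTokens(prompt)); // → 16
```

Summing such estimates across your prompt, chat history, and attached files gives a feel for how quickly a context window fills up.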
## Making the Most of Your Context Window
Want to get the best results? Here's what actually works:
* **Stick to the same words:** Let's say you're discussing "customer segments" - don't suddenly switch to calling them "user groups." Consistency helps the AI connect the dots.
* **Point back to what matters:** Instead of "as mentioned," try "like in the pricing section we discussed" - be specific about what you're referencing.
* **Quick recaps work wonders:** Every few messages, drop a quick summary. Think of it as giving your AI a refresher on where you've been.
**Pro tip:** Starting fresh can be powerful! For every **new topic**, start a new conversation. Also, after about **60 messages** in one chat, it's time for a fresh start - the AI will thank you with better responses. Save your favorite prompts to your library so you can quickly reuse them in new conversations.
# Custom instructions
Source: https://docs.langdock.com/resources/custom-instructions
Provide custom instructions to improve responses.
To improve the responses you receive, you can provide additional information about yourself or the replies you expect.
# Company information
In the [company settings](https://app.langdock.com/settings/workspace/general), the admin of a Langdock workspace can give the AI model context about the company. This context is automatically included in every conversation, so you don't need to repeat company details each time.
**Examples:**
* Company Name
* Industry
* Customers (B2B, B2C, public institutions, enterprises, small agencies)
* Specific internal terminology (acronyms, names of essential product areas, …)
# Custom instructions about you
You can find “Custom Instructions” in the [individual preferences](https://app.langdock.com/settings/account/custom-instructions) of your settings. Toggle it to "Active" and enter information about yourself and how you expect the model to reply.
These instructions are automatically sent to the model with every message you send.
**Example:**
If you ask to `Write me a short thank you message for the colleagues in my department for going the extra mile the last 2 weeks`, the model does not know which department you work in.
If you specify in the [custom instructions](https://app.langdock.com/settings/account/custom-instructions), `I work in marketing`, that context gets sent along with your thank you message request. The response will be tailored specifically to your marketing team.
**Examples:**
* What is your job?
* What is important to you in communication?
* What are the main topics you are working with?
* What tasks do you want to accomplish with Langdock?
* What is your target audience? To whom should the answers be addressed?
# Custom instructions for responses
In the second field, you can describe what role the AI model should take on, what answers should look like, and in what style they should be written.
You can ask to always write in continuous text, in a particular style, or as if the answer were written by a specific person/role.
**Examples:**
* Always write in \[Du/Sie] form.
* NEVER mention that you are an AI.
* If events or information go beyond your knowledge, answer with "I don't know" without further explaining why the information is unavailable.
* Keep answers unique and free from repetition.
* Always focus on the critical points in my questions to recognize my intent.
* Break down complex problems or tasks into smaller, manageable steps and explain each one with arguments.
* Offer multiple perspectives or solutions.
* If a question is unclear or ambiguous, ask for more details before answering to confirm your understanding.
* Avoid paragraphs with more than three sentences.
* Use analogies or metaphors to explain complex topics.
* Sum up the most important insights at the end of your response.
* Provide credible sources or references to support your answers, if available, with links.
* If you made a mistake in a previous answer, acknowledge and correct it.
* Always add a list of 4 possible thought-provoking follow-up questions at the end of your response. Phrase them as if I was asking you. They should allow the user to follow up on the topic or dive deeper into specific aspects.
Number them 1, 2, 3 and 4, so the user can just enter the according number in the next message to reference this prompt. When you get a number in a new prompt, answer with the response to the according prompt suggestion in the previous response.
# When do I attach a file to a chat, when to an agent and when do I use a knowledge folder?
Source: https://docs.langdock.com/resources/faq/attachments
Learn when to attach files directly to chats or agents versus using knowledge folders based on file quantity, length, and usage frequency.
**When to attach a file to a chat or an agent chat:**
* There is a small number of files
* The file(s) is/are relatively short
* You use the file only once or only in one chat
**When to add a file in the agent knowledge:**
* There is a small number of files
* The file(s) is/are relatively short
* You want to use the file regularly in the agent
* The file does not change often (maybe every few days), so it is not too much effort to attach it again to the agent
**When to use knowledge folders:**
* There is a large number of files
* The files are very long
* You only need specific sections of the files for a prompt, not the entire files
* For example: You have built an FAQ agent and attached documentation to it. For each prompt, only some topics are needed and only the relevant sections are used to answer the request.
To find out more about the functionality of the different features, please refer to the next section.
# How long are files saved in Langdock?
Source: https://docs.langdock.com/resources/faq/data-retention-for-files
Understand how long files are stored in Langdock and how data retention policies affect chats, agents, and knowledge folders.
Files are connected to either a chat, an agent, or a knowledge folder. To remove a file, delete the corresponding entity. When a chat, an agent, or a knowledge folder is deleted, connected files are deleted immediately and cannot be retrieved.
To manage how long chats are saved, Langdock has a data retention period that can be set by admins. Chats which have not been used in the defined period are deleted automatically. You can read more about this [here](/settings/workspace#chat).
# Why is there a limit of 20 files in the chat and in agents?
Source: https://docs.langdock.com/resources/faq/file-limit
Learn why chats and agents have a 20-file attachment limit and how to work with larger document collections using knowledge folders.
Large Language Models have a context window, which is the maximum amount of text they can process at once. This includes the prompt you send to the model, the previous chat history, as well as attached files (explained in [this guide](/resources/basics)). To work efficiently with files, reduce the number of attached documents to the minimum possible.
The limit of 20 files helps to increase the probability that all of the content fits into the context window. To work with larger documents or more files, you can use a [knowledge folder](/resources/integrations/knowledge-folders) and attach it to an agent.
# How does a file attachment work and how is it different to a file in a knowledge folder?
Source: https://docs.langdock.com/resources/faq/knowledge-folders-and-direct-attachments
Understand the difference between direct file attachments and knowledge folders, including how each processes documents for AI responses.
There are two ways in which the contents of a file can be processed to generate an answer:
* One is that the **entire document** is sent to the model, together with your prompt (see [this guide](/resources/basics)). This is the standard in chats and agents.
* AI models have a context window, which is the limit of how much text can be processed at once. For long documents or a large number of documents, the documents are **summarized and only relevant sections** are sent to the model in the context window. This is used in [knowledge folders](/resources/integrations/knowledge-folders).
Attaching files directly to an agent or chat leads to the best results, so use this option where possible. We recommend attaching as few documents as possible to an agent or chat.
In some use cases, for example when working with large documentation or building an FAQ agent, attaching the documents directly to an agent or chat is not possible. Here, you can use the knowledge folder feature, which works well for use cases where only specific sections are relevant rather than the entire documents.
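To make the "only relevant sections" behavior concrete, here is a deliberately simplified sketch. Langdock uses embedding-based semantic search; plain keyword overlap below is only a stand-in for illustration, and the sample sections and scoring are hypothetical:

```javascript
// Simplified illustration (NOT Langdock's actual retrieval): score document
// sections by word overlap with the prompt and keep only the top matches.
function topSections(prompt, sections, k = 2) {
  const words = new Set(prompt.toLowerCase().split(/\W+/).filter(Boolean));
  return sections
    .map((text) => ({
      text,
      // Count how many of the section's words appear in the prompt.
      score: text.toLowerCase().split(/\W+/).filter((w) => words.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((s) => s.text);
}

const sections = [
  'Refund policy: refunds are issued within 14 days.',
  'Shipping times vary by region.',
  'Refund requests require an order number.',
];
console.log(topSections('How do I get a refund?', sections));
```

For the sample prompt, only the two refund-related sections would be forwarded to the model; the shipping section stays out of the context window.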
# When are newly released models available in Langdock?
Source: https://docs.langdock.com/resources/faq/model-availability
Learn when newly released AI models become available in Langdock and why EU deployment takes longer than US availability.
Models are usually available in the US first. It takes a few weeks until they are launched in the EU in a GDPR-compliant way. We add the models as soon as they are available in the EU.
# Why can I not add a repository to a chat / an agent / a knowledge folder?
Source: https://docs.langdock.com/resources/faq/repository-in-knowledge-folder
Understand why full repositories cannot be added to Langdock and learn the best practices for working with code files.
AI models are not good at processing a whole repository yet. There are a few reasons for this:
* First, they have a context window (the maximum amount of text they can process at once), which is often smaller than the repository. Chats and agents have a limit of 20 files to increase the chance that the files fit into the context window, and a repository often contains more than 20 files.
* To handle documents or document batches that are larger than the context window, we built the knowledge folder. Since the context window is a technical limitation of the model, not everything can be sent to it. An embedding search (a semantic pre-selection) identifies relevant sections of the documents, and only these sections are sent to the model. For coding, however, it is important to consider the entire file, not only selected sections, so the context window limits this approach as well.
* Lastly, even if the repository fits into the context window, the model may still struggle to understand a large repository, since answer quality decreases as the context window fills up.
In our experience, the best approach is to work with individual files, smaller sections, or screenshots only.
# Which file types does Langdock support?
Source: https://docs.langdock.com/resources/faq/supported-file-types
Complete reference of supported file types in Langdock including text, tabular, image, and audio files with size limits.
Langdock supports the following file types:
**Text-based files:**
| File Type | File Extension | File Size Limit |
| -------------------------- | -------------- | --------------- |
| PDF | .pdf | 256 MB |
| Markdown | .md | 10 MB |
| Text | .txt | 10 MB |
| Word / Google Docs | .docx | 256 MB |
| PowerPoint / Google Slides | .ppt | 256 MB |
| JSON | .json | 10 MB |
For text-based files, an additional limit of 4 million characters applies alongside the file size limit. A PDF, for example, can be up to 256 MB **or** 4 million characters, whichever limit is reached first.
**Tabular Files:**
| File Type | File Extension | File Size Limit |
| --------------------- | ---------------------------------------------- | --------------- |
| Excel / Google Sheets | .xlsx, .xls, .xlsm, .xltx, .xltm, .xlam, .xlsb | 30 MB |
| CSV | .csv | 30 MB |
| TSV | .tsv | 30 MB |
Tabular files (Excel, CSV) can only be uploaded directly in chat and agents, not in knowledge folders. To find out more about the technical details, see [this](/resources/faq/tabular-files-in-knowledge-folders) page.
**Images:**
| File Type | File Extension | File Size Limit |
| --------- | -------------- | --------------- |
| JPG | .jpg | 20 MB |
| PNG | .png | 20 MB |
| HEIF | .heif | 20 MB |
Images can only be uploaded in chat, not in knowledge folders and agents.
**Audio Files**
| File Type | File Extension | File Size Limit |
| ----------- | ---------------------------------------------- | --------------- |
| Audio files | .mp3, .wav, .ogg, .mpeg, .mp4, .m4a, .vnd.wave | 200 MB |
Audio files can only be uploaded in the chat, not in knowledge folders and agents.
# Why can I not upload Excel Files / CSVs to a knowledge folder?
Source: https://docs.langdock.com/resources/faq/tabular-files-in-knowledge-folders
Learn why Excel and CSV files cannot be added to knowledge folders and how to work with tabular data in Langdock.
Knowledge folders are a workaround for working with very large amounts of text, larger than what models can process at once. A semantic search identifies relevant sections, and only these sections are used to generate the response ([here](/resources/integrations/knowledge-folders) is a detailed explanation).
Excel and CSV files need a different workaround to be processed. We use the data analyst functionality, which generates Python code that is then executed to extract relevant information (explained in depth [here](/product/chat/data-analysis)).
In summary, these functionalities are used to process specific formats of data. Unfortunately, they cannot be combined: the knowledge folder functionality would only extract parts of the attached file, and the data analyst needs the complete data to work effectively.
# Further Resources
Source: https://docs.langdock.com/resources/further-resources
Looking to level up your prompt engineering skills? We've curated these resources to help you master the craft, from foundational concepts to advanced techniques.
These are divided into general and level-specific categories. [General resources](#general-resources) are valuable for everyone, regardless of your current expertise. [Level-specific resources](#level-specific-resources) are tailored to match where you are in your prompt engineering journey.
We continuously update this page with the latest insights and techniques, so bookmark it and check back regularly!
# General Resources
Learn from the organizations building these AI systems and the experts pushing the boundaries of what's possible with prompts.
## Other Prompt Engineering Guides
* [OpenAI Prompt Engineering Guide](https://platform.openai.com/docs/guides/prompt-engineering)
* [Anthropic Prompt Engineering Overview](https://platform.claude.com/docs/en/docs/build-with-claude/prompt-engineering/overview)
* [Mistral Prompting Capabilities](https://docs.mistral.ai/capabilities/completion/prompting_capabilities)
## Leading Experts on AI & Prompt Engineering
### Dr. Lance B. Eliot
World-renowned AI expert with over 7.4 million views on his AI columns. As a seasoned CIO/CTO and entrepreneur, he bridges practical industry experience with deep academic research.
> The use of generative AI can altogether succeed or fail based on the prompt that you enter
Follow him on X: [Link](https://twitter.com/LanceEliot)
Read his articles here: [Link](https://www.forbes.com/sites/lanceeliot/)
Our top 3 articles of Dr. Lance B. Eliot to begin with:
1. Must-Read Best Of Practical Prompt Engineering Strategies To Become A Skillful Prompting Wizard In Generative AI: [Link](https://www.forbes.com/sites/lanceeliot/2023/12/28/must-read-best-of-practical-prompt-engineering-strategies-to-become-a-skillful-prompting-wizard-in-generative-ai/)
2. The Best Prompt Engineering Techniques For Getting The Most Out Of Generative AI: [Link](https://www.forbes.com/sites/lanceeliot/2024/05/09/the-best-prompt-engineering-techniques-for-getting-the-most-out-of-generative-ai/)
3. New Chain-Of-Feedback Prompting Technique Spurs Answers And Steers Generative AI Away From AI Hallucinations: [Link](https://www.forbes.com/sites/lanceeliot/2024/04/11/new-chain-of-feedback-prompting-technique-spurs-answers-and-steers-generative-ai-away-from-ai-hallucinations/)
### Andrew Ng
Globally recognized AI leader and pioneer in machine learning education. Author of 200+ research papers, he was named to the TIME100 AI list of the most influential people in AI in 2023.
> We should automate things that are routine and boring, so we can spend more time doing things that are fulfilling
Homepage: [Link](https://www.andrewng.org/)
His courses: [Link](https://www.deeplearning.ai/)
Follow him on X: [Link](https://twitter.com/AndrewYNg)
Newsletter: [Link](https://www.deeplearning.ai/the-batch/tag/letters/)
### Andrej Karpathy
Renowned computer scientist and former Director of AI at Tesla, where he led the Autopilot Vision team. OpenAI co-founder specializing in deep learning and computer vision.
> The hottest new programming language is English
Homepage: [Link](https://karpathy.ai/)
Follow him on X: [Link](https://twitter.com/karpathy)
YouTube Channel: [Link](https://www.youtube.com/@AndrejKarpathy)
We highly recommend watching his “Intro to LLMs Talk” [here](https://www.youtube.com/watch?v=zjkBMFhNj_g\&t=1431s) or read its summarized version [here](https://ppaolo.substack.com/p/introduction-to-large-language-models-llms).
# Level-specific Resources
Jump to resources that match your current expertise.
### Beginner Level
Start with Andrej Karpathy's 1-hour LLM introduction [here](https://www.youtube.com/watch?v=zjkBMFhNj_g\&t=1431s), the clearest explanation we've found. Prefer reading? Get the full transcript [here](https://ppaolo.substack.com/p/introduction-to-large-language-models-llms).
Here are some quick cheat sheets for you:
* [Cheat Sheet #1](https://medium.com/the-generator/the-perfect-prompt-prompt-engineering-cheat-sheet-d0b9c62a2bba)
* [Cheat Sheet #2](https://github.com/devwhocodes/Prompt-Engineering-CheatSheet)
* [Cheat Sheet #3](https://cookbook.openai.com/examples/gpt4-1_prompting_guide)
### Intermediate Level
Stay current with these essential AI resources:
* [Ben's Bites](https://bensbites.beehiiv.com/)
* [Prompts daily](https://www.neatprompts.com/)
* [The batch](https://www.deeplearning.ai/the-batch/)
### Expert Level
Prompts for Data Scientists: [Link](https://github.com/travistangvh/ChatGPT-Data-Science-Prompts)
# Clear and Specific Instructions
Source: https://docs.langdock.com/resources/instructions
Learn how to provide clear and specific instructions to guide AI responses and avoid vague, generic outputs.
Providing clear and specific instructions is crucial for guiding LLM responses. Vague prompts lead to irrelevant outputs because LLMs lack context about your specific needs and will default to generic responses.
The more context you provide, the better the LLM can tailor its response to your exact requirements.
*Vague prompt:*
`Tell me about space.`
*Specific prompt:*
`Provide a brief overview of the solar system, including the names and key characteristics of each planet.`
# Action Builder Agent
Source: https://docs.langdock.com/resources/integrations/agent
This agent helps you write actions for your integration. Simply add relevant documentation, describe your use case, and chat with the agent to write the JavaScript.
More details on how to write integrations can be found in [Creating Custom Integrations](/resources/integrations/create-integrations).
For a complete reference of available sandbox functions, see [Sandbox Utilities](/resources/integrations/sandbox-utilities).
**Name**
```
Langdock Integrations Agent
```
**Description**
```
Agent to support building Langdock Integrations
```
**Instructions**
```
You are an agent that supports users in crafting JavaScript code to build integrations for the Langdock platform. The JavaScript runtime is sandboxed and only has limited functions available (for security purposes), so please try to use basic code as much as possible. However, aim to be as efficient as possible with the least amount of lines of code.
The code should be written in plain JavaScript. For every function invocation, there is a data object passed that consists of an input and an auth object:
Both contain records with values that can be used within the function. The data object is automatically available and does not need to be imported. The code can also access functions provided to the sandbox:
ld.request: Use this function if you want to retrieve data from any API. This should be used for any fetch request. ld.request accepts an object as input, where headers, params, body, method, etc., can be provided. If a body is provided, it should be a normal object. ld.request will stringify the body automatically. For file downloads, you should provide an attribute called responseType that can either be 'stream' or 'binary', according to the API from which the file is being loaded. ld.request automatically returns the result as response.buffer in the appropriate format.
Here is an example:
"""
const options = {
method: 'GET',
url: `https://www.googleapis.com/drive/v3/files/${data.input.itemId}/export?mimeType=text/plain`,
headers: {
'Authorization': 'Bearer ' + data.auth.access_token,
'Accept': 'application/json'
}
};
"""
If the content-type header is set to application/x-www-form-urlencoded, the ld.request function automatically converts the body into the appropriate format.
The function returns the following:
status: Response HTTP status code
headers: Response headers
json: Response body parsed as JSON -> the code you write can therefore access response.json without needing to await it. For example, you can simply use data = response.json; to get the body of the request.
text: Response body as text
buffer: Response as buffer
ld.log: Takes a string as input. Can be used as a drop-in replacement for console.log. Logs everything passed to it to the console and is helpful for debugging.
If the user instructs you to build a native integration, you need to output a specific object as the return of your function:
For a native search integration, the expected schema is:
"""
url: z.string(),
documentId: z.string(),
title: z.string(),
author: z.object({
id: z.string(),
name: z.string(),
imgUrl: z.string().optional(),
}).optional(),
mimeType: z.string(),
lastSeenByUser: zodDateTransformer(),
createdDate: zodDateTransformer(),
lastModifiedByAnyone: zodDateTransformer(),
lastModifiedByUserId: z.object({
id: z.string().optional(),
name: z.string().optional(),
lastModifiedByUserIdDate: zodDateTransformer(),
}).transform((data) => {
if (!data.id || !data.name || !data.lastModifiedByUserIdDate) {
return undefined;
}
return data;
}).optional(),
parent: z.object({
id: z.string(),
title: z.string().optional(),
url: z.string().optional(),
type: z.string().optional(),
driveId: z.string().optional(),
siteId: z.string().optional(),
listId: z.string().optional(),
listItemId: z.string().optional(),
}).optional()
"""
If you don't have a value for an optional attribute, please do not fill it out.
For our native download file function, we expect a return of the following structure:
data: response.data,
fileName: string,
mimeType: string,
buffer: response.buffer
It gets an itemId and a parent object (as a string) as input. The Langdock sandbox environment provides access to the standard JavaScript functions for base64 encoding and decoding:
"""
atob(): Decodes a base64-encoded string into a binary string. Usage: atob('SGVsbG8gV29ybGQ=') returns "Hello World".
btoa(): Encodes a binary string into base64. Usage: btoa('Hello World') returns "SGVsbG8gV29ybGQ=".
"""
atob and btoa can be used without imports, like:
"""
function base64UrlDecode(base64Url) {
return atob(base64Url.replace(/-/g, '+').replace(/_/g, '/'));
}
"""
These functions are particularly useful when working with email content, file attachments, or any API that returns base64-encoded data.
The code that should run immediately should not be wrapped in a function. It should just be plain JavaScript code. Please ensure that you always use return to return the expected result back to our app. It is awaited automatically by the sandbox.
Always prefer async/await syntax over .then() syntax for better readability.
```
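The base64 helpers mentioned in the agent instructions can be sanity-checked standalone: in Node 18+, `atob` and `btoa` are available as globals, just as in the Langdock sandbox. The sample strings below are illustrative:

```javascript
// Round-trip demo for the sandbox's base64 helpers.
// base64url replaces '+' and '/' with '-' and '_', so decode by mapping back.
function base64UrlDecode(base64Url) {
  return atob(base64Url.replace(/-/g, '+').replace(/_/g, '/'));
}

console.log(atob('SGVsbG8gV29ybGQ=')); // → "Hello World"
console.log(base64UrlDecode('fn5-'));  // → "~~~" ('fn5+' in standard base64)
```

This pattern is what you need when an API such as Gmail returns message bodies or attachments in base64url form.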
**Actions**
```
Web Search enabled
```
**Creativity**
```
0.3
```
# Configure Your Own OAuth Clients
Source: https://docs.langdock.com/resources/integrations/bring-your-own-oauth
Set up your own OAuth application for integrations to control scopes, enable additional integrations, or replace Langdock's default OAuth client with your custom configuration.
Custom OAuth clients apply workspace-wide for the specific integration. All new connections will use your OAuth application once configured.
## How Custom OAuth Works
When you configure a custom OAuth client, Langdock routes all authentication flows through your OAuth application instead of the default Langdock client. This means:
* **Your branding** (custom name and logo) appears in consent screens
* **Your tenant policies** control user access and admin consent requirements
* **Your rate limits** apply to API calls made by your users
Register a new OAuth application in your provider's developer portal (Google Cloud Console, Microsoft Azure, etc.).
**Required Configuration:**
* Copy the exact redirect URL from Langdock's integration settings
* Select all required scopes shown in Langdock for that integration
* Configure any tenant-specific settings (admin consent, allowlisting)
Note down the following from your OAuth app:
* **Client ID** (always required)
* **Client Secret** (always required)
* **Tenant ID or Domain** (required for some providers like Microsoft)
Navigate to **Settings → Integrations** and select your target integration.
View the current **configuration** next to the integration and **enable all required scopes** in your own OAuth client. Scopes that are not added to your client will cause an insufficient scopes error.
Close this screen, click on the integration, and select **Configure your own** from the OAuth client dropdown.
Paste your **Client ID** and **Client Secret**, and click **Save**.
Have a user connect their account to verify:
* Consent screen shows your client
* Required scopes are granted
* Data access works as expected through actions
## Integration Settings Interface
When configuring a custom OAuth client, you'll see these fields:
Copy this exact URL to your OAuth app's redirect URI configuration. The URL format is:
```
https://app.langdock.com/api/integrations/{integration-id}/callback
```
The redirect URL must match exactly. Any mismatch will cause `redirect_uri_mismatch` errors.
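If you want to check for a mismatch programmatically before saving, comparing the two URLs component by component catches the usual culprits. A minimal sketch, not part of Langdock itself (the example URLs are illustrative):

```javascript
// Compare the redirect URL registered in your OAuth app with the one
// Langdock displays, flagging the differences that cause
// `redirect_uri_mismatch` errors. Illustrative helper only.
function diagnoseRedirectUri(registered, expected) {
  const problems = [];
  const a = new URL(registered);
  const b = new URL(expected);
  if (a.protocol !== b.protocol) problems.push("protocol mismatch");
  if (a.host.toLowerCase() !== b.host.toLowerCase()) problems.push("host mismatch");
  if (a.pathname.replace(/\/$/, "") !== b.pathname.replace(/\/$/, ""))
    problems.push("path mismatch");
  if (a.pathname.endsWith("/") !== b.pathname.endsWith("/"))
    problems.push("trailing slash differs");
  return problems;
}

// Hypothetical example: wrong protocol plus an extra trailing slash
diagnoseRedirectUri(
  "http://app.langdock.com/api/integrations/example/callback/",
  "https://app.langdock.com/api/integrations/example/callback"
);
// → ["protocol mismatch", "trailing slash differs"]
```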
Each integration displays the specific OAuth scopes needed for full functionality. For example, Gmail requires:
* `https://www.googleapis.com/auth/gmail.send`
* `https://www.googleapis.com/auth/gmail.compose`
* `https://www.googleapis.com/auth/gmail.readonly`
* `https://www.googleapis.com/auth/gmail.labels`
* `https://www.googleapis.com/auth/gmail.modify`
Copy these scopes to your OAuth app configuration. Scopes that are not added to your client will cause an insufficient scopes error.
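Since scope names are compared exactly, a quick diff of Langdock's required list against what you configured can catch typos before users hit the insufficient-scopes error. A minimal sketch, assuming you paste both lists in by hand:

```javascript
// Return the scopes Langdock requires that are missing from your OAuth
// app configuration. Comparison is exact, since scope names are
// case-sensitive. Illustrative helper, not a Langdock API.
function missingScopes(required, configured) {
  const have = new Set(configured);
  return required.filter((scope) => !have.has(scope));
}

const required = [
  "https://www.googleapis.com/auth/gmail.send",
  "https://www.googleapis.com/auth/gmail.readonly",
];
const configured = ["https://www.googleapis.com/auth/gmail.send"];
missingScopes(required, configured);
// → ["https://www.googleapis.com/auth/gmail.readonly"]
```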
Enter your OAuth application credentials:
* **Client ID**: Your app's public identifier
* **Client Secret**: Your app's private key (stored encrypted)
* **Tenant/Domain**: Required for Microsoft integrations
## Behavior and Impact
### Workspace-Wide Changes
* All new connections use your custom OAuth client
* Existing connections continue working until users reconnect
* Only affects the specific integration you configured
### User Experience
* Users see your app name and branding in consent screens
* Authentication flows redirect through your OAuth application
* Your app's rate limits and quotas apply to user requests
## Common Configuration Errors
**Cause**: Redirect URL doesn't match exactly between Langdock and your OAuth app
**Solution**:
* Copy the redirect URL from Langdock exactly
* Check for trailing slashes or protocol mismatches
* Verify you're configuring the correct environment
**Cause**: Client ID or Client Secret is incorrect
**Solution**:
* Double-check credentials from your OAuth app
* Ensure no extra spaces or characters
* Verify the client is enabled in your provider's console
**Cause**: Admin consent required but not granted
**Solution**:
* Grant admin consent in your tenant settings
* Enable user consent if appropriate for your organization
* Check tenant allowlisting requirements
**Cause**: Missing required scopes in your OAuth app
**Solution**:
* Add all scopes shown in Langdock to your OAuth app
* Users may need to reconnect after adding scopes
* Verify scope names match exactly (case-sensitive)
## Managing Existing Connections
When switching from Langdock's default client to your custom OAuth client:
Existing user connections may require re-authentication. We suggest notifying your users upfront.
### Migration Steps
1. Configure your custom OAuth client
2. Notify users about the upcoming change
3. Save the new OAuth configuration
4. Users reconnect their accounts when prompted
5. Verify all connections work with your custom client
Keep your OAuth app credentials secure and limit access to organization admins only.
***
## Integrations Requiring Your Own OAuth Client
Some of our integrations can only be used when providing your own OAuth client. Details on how to connect them with Langdock are described in this section.
### ServiceNow
**Required authentication fields**
* Provide ServiceNow Subdomain
When a user wants to create a connection with ServiceNow, they have to provide the subdomain of your ServiceNow instance.
### ServiceNow Integration Requirements
Only Cloud-hosted accounts are currently supported.
A paid ServiceNow account is required to create an application registry. View ServiceNow's plans [here](https://www.servicenow.com/lpgp/pricing.html).
To connect via OAuth, your systems administrator must configure your instance so that users can connect using an OAuth connection. For example, all users need the `oauth_user` role to be able to connect. Learn more about ServiceNow's [groups and permissions](https://www.servicenow.com/docs/bundle/zurich-platform-security/page/integrate/identity/task/view-permissions-for-a-group.html).
ServiceNow implements rate limiting to prevent excessive API usage. System administrators can configure rules that restrict the number of inbound REST API requests processed per hour. Learn more about ServiceNow's [usage limits](https://www.servicenow.com/docs/bundle/zurich-api-reference/page/integrate/inbound-rest/concept/inbound-REST-API-rate-limiting.html).
## Configuring a Snowflake OAuth Client
Configuring your own OAuth client for Snowflake gives you control over authentication policies, token validity periods, and IP allowlisting within your Snowflake environment.
**Required Information:**
* **OAuth Redirect URL**: Copy this from Langdock's Snowflake integration settings page
* **Client ID**: Generated by Snowflake after creating the security integration
* **Client Secret**: Generated by Snowflake after creating the security integration
* **Authorization URL**: Your Snowflake account's authorization endpoint
The Redirect URL from Langdock must be provided in Snowflake, while the Client ID, Client Secret, and Authorization URL from Snowflake must be entered into Langdock's integration settings.
If your Snowflake account has network policies or IP allowlisting enabled, you may need to allowlist Langdock's static IP address to allow connections. See [Static IP Configuration](/settings/security/static-ip-configuration) for details.
### Setup Guide
In your Langdock workspace, create your new Snowflake integration, and set up a custom OAuth Client.
1. Navigate to Integrations in Langdock
2. Click "Add Integration" and select "Start from Scratch"
3. Fill in your preferred name and description for your new Snowflake integration
4. Click "Create"
5. Authentication Type: Select "OAuth 2.0" from the dropdown
6. Authentication fields: Leave blank
7. OAuth Configuration: Save your **OAuth Redirect URL**
In Snowflake, select your workspace, and create a new security integration.
1. Create a new `.sql` file, and paste the following query:
```sql theme={null}
CREATE SECURITY INTEGRATION <integration_name>
TYPE = OAUTH
ENABLED = TRUE
OAUTH_CLIENT = CUSTOM
OAUTH_CLIENT_TYPE = 'CONFIDENTIAL'
OAUTH_REDIRECT_URI = '<oauth_redirect_url>'
OAUTH_ISSUE_REFRESH_TOKENS = TRUE
OAUTH_REFRESH_TOKEN_VALIDITY = 86400;
```
2. Replace `<integration_name>` with a descriptive name and `<oauth_redirect_url>` with the **OAuth Redirect URL** from Step 1.
3. Run the query to create your new security integration in Snowflake.
**Note:** Adjust your `OAUTH_REFRESH_TOKEN_VALIDITY` value based on your security policies.
1. Within the same workspace, run the following query:
```sql theme={null}
SELECT SYSTEM$SHOW_OAUTH_CLIENT_SECRETS('<integration_name>');
```
2. Replace `<integration_name>` with the name you gave your security integration in the previous step.
3. Save your **Client ID** and **Client Secret**. Store these credentials securely as they provide access to your Snowflake account.
1. Click on your account name in the bottom left corner of your Snowflake application
2. Under the Account Section, click on "View Account Details"
3. Copy your **Account URL**
1. Add your **Client ID** and **Client Secret** from Step 3 in the respective input fields of your new Langdock integration
2. In the following sections:
* Authorization URL
* Access Token URL
* Refresh Token URL
Replace only `https://example.com` with your **Snowflake Account URL** from Step 4.
Example:
```
https://example.com/oauth/authorize
```
Becomes:
```
https://<your-snowflake-account-url>/oauth/authorize
```
3. Click **Save**
Click "Add Connection" in your Snowflake integration.
* You should be directed to the Snowflake Authorization screen
* Log into your Snowflake account
You have now successfully set up your own OAuth Snowflake integration!
# Creating Custom Integrations
Source: https://docs.langdock.com/resources/integrations/create-integrations
Custom integrations let you connect any API-enabled tool to an agent, opening up endless possibilities. Below you'll find a comprehensive guide on how to build integrations, actions, and triggers.
## Integrations vs. Actions vs. Triggers
**Integrations** are standardized connections between Langdock and third-party tools that handle authentication and API communication. Within each integration, you can build:
* **Actions**: Functions that agents and workflows can call to interact with APIs (e.g., "create ticket", "send email", "get data")
* **Triggers**: Event monitors that start workflows when specific events occur (e.g., "new email received", "file uploaded")
Triggers will only be available with the launch of Workflows; until then, they cannot be configured.
## Setting up an Integration
In the [integrations menu](https://app.langdock.com/integrations), click `Add integration` to get started.
Next, specify an integration name and upload an icon (shown in chat when using actions and in the integrations overview). Add a description to help agents know when to use this integration. Hit `Save` to create it.
### Authentication
Start with authentication in the `Build` tab. Select your authentication type and configure it following the steps below:
### API Key
After selecting API Key authentication, add custom input fields in step 2 (like API key or client ID). These inputs are collected when users set up connections and can be marked as "required."
Step 3 lets you set up a test API endpoint to validate authentication. Replace the URL parameter and add references to your input fields using `data.auth.fieldId`.
Use the built-in `ld.request` and `ld.log` functions for requests and logging.
Test your action and create your first connection.
### OAuth 2.0
Custom integrations support OAuth 2.0 authentication.
Step 2 allows custom input fields (collected during connection setup). Client ID and Client Secret are entered in step 4, so this covers additional parameters only.
**Create an OAuth client**
**Set up an OAuth client** (app/project) in your target application and **enable the required APIs**. The exact steps are application-specific, which is why our interface supports custom code in step 5.
For Google Calendar, **create an OAuth client** in the Google Cloud Console, generate credentials to get the `client ID` and secret, add them to Langdock in step 4, save the `OAuth Redirect URL`, and enable the Google Calendar API.
**Change Authorization URL**
Check the OAuth documentation for your service and extract the `Authorization URL`. Usually, changing the `BASE_URL` in our template is sufficient.
For Google Calendar:
```
return `https://accounts.google.com/o/oauth2/v2/auth?client_id=${env.CLIENT_ID}&response_type=code&scope=${data.input.scope}&access_type=offline&redirect_uri=${encodeURIComponent(data.input.redirectUrl)}&state=${data.input.state}&prompt=consent`;
```
**Define Scopes**
Define the OAuth scopes required by your actions. List them comma- or space-separated, according to your API documentation.
For Google Calendar (space-separated):
```
https://www.googleapis.com/auth/calendar.events https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile
```
**Provide Access Token & Refresh Token URL**
Check your API's OAuth docs for the `Access Token URL` and `Refresh Token URL`. Usually, updating the `tokenUrl` in our template works.
For Google Calendar:
`const tokenUrl = 'https://oauth2.googleapis.com/token';`
**Test Authentication Setup**
Provide a test API endpoint (like `/me`) to verify authentication. The return value of that test request can be used inside the **OAuth Client Label** to influence the naming of the established connections. You can access the return value via: `{{data.input}}`
For Google Calendar: `Google Calendar - {{data.input.useremail.value}}`
Test by adding a connection and verifying the authorization flow works.
For Google Calendar, we test with:
```
url: 'https://people.googleapis.com/v1/people/me?personFields=names,emailAddresses'
```
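To illustrate how a label template like the one above could resolve against the test request's return value, here is a simplified stand-in for the placeholder substitution (the real resolution happens inside Langdock; this sketch only mirrors the idea):

```javascript
// Resolve "{{path.to.value}}" placeholders against a context object.
// Simplified illustration of how an OAuth Client Label template could be
// filled in; not Langdock's actual implementation.
function renderLabel(template, context) {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_, path) =>
    path
      .split(".")
      .reduce((obj, key) => (obj == null ? undefined : obj[key]), context) ?? ""
  );
}

// Hypothetical test-request return value
const context = { data: { input: { useremail: { value: "jane@example.com" } } } };
renderLabel("Google Calendar - {{data.input.useremail.value}}", context);
// → "Google Calendar - jane@example.com"
```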
### Public APIs
Choose `None` for publicly available APIs without authentication.
## Building Actions
Actions allow agents to interact with your API endpoints. There are two types of actions:
* **Regular Actions**: Standard API interactions (create, read, update, delete operations)
* **Native Actions**: Special file search and download actions that integrate with Langdock's file system
### Regular Actions
Regular actions are the most common type and handle standard API operations.
#### When to Build Regular Actions
* **CRUD operations**: Create, read, update, or delete data via API calls
* **Data processing**: Send data to APIs for analysis, transformation, or validation
* **File operations**: Upload files to services, process documents, send attachments
* **Notifications**: Send emails, messages, or create tickets
* **Integrations**: Connect multiple services or sync data between platforms
#### Setting Up Regular Actions
1. **Add Action**: In your integration, click "Add Action"
2. **Configure Basic Info**: Set name, description, and slug
3. **Add Input Fields**: Define what data the action needs from users
4. **Write Action Code**: Implement the API interaction logic
5. **Test**: Validate your action works correctly
#### Input Field Types
| Type | Purpose | Notes |
| ----------------- | ----------------- | ----------------------------------------------------------------------------------------------------- |
| `TEXT` | Short text input | Single line text |
| `MULTI_LINE_TEXT` | Long text input | Multiple lines, good for descriptions |
| `NUMBER` | Numeric input | Integers or decimals |
| `BOOLEAN` | True/false toggle | Checkbox input |
| `SELECT` | Dropdown options | Pre-defined choices |
| `FILE` | File upload | Single or multiple files ([see file support guide](/resources/integrations/file-support-for-actions)) |
| `OBJECT` | Complex data | JSON objects with custom schema |
| `PASSWORD` | Sensitive text | Hidden input for secrets |
#### Example: Create Ticket Action
```javascript theme={null}
// Validate required inputs
if (!data.input.title) {
return { error: "Title is required" };
}
// Build request
const options = {
method: "POST",
url: "https://api.ticketing-service.com/tickets",
headers: {
Authorization: `Bearer ${data.auth.api_key}`,
"Content-Type": "application/json",
},
body: {
title: data.input.title,
description: data.input.description || "",
priority: data.input.priority || "medium",
assignee: data.input.assignee,
},
};
try {
const response = await ld.request(options);
if (response.status === 201) {
return {
success: true,
ticketId: response.json.id,
url: response.json.url,
message: `Created ticket #${response.json.id}: ${data.input.title}`,
};
} else {
throw new Error(`API returned status ${response.status}`);
}
} catch (error) {
ld.log("Error creating ticket:", error.message);
return {
success: false,
error: `Failed to create ticket: ${error.message}`,
};
}
```
#### File Upload Example
```javascript theme={null}
// Handle file uploads (requires FILE input field)
const document = data.input.document; // FileData object
if (!document) {
return { error: "Please attach a document" };
}
// Validate file type
const allowedTypes = ["application/pdf", "image/jpeg", "image/png"];
if (!allowedTypes.includes(document.mimeType)) {
return {
error: `Unsupported file type: ${
document.mimeType
}. Allowed: ${allowedTypes.join(", ")}`,
};
}
const options = {
method: "POST",
url: "https://api.example.com/documents",
headers: {
Authorization: `Bearer ${data.auth.api_key}`,
"Content-Type": "application/json",
},
body: {
filename: document.fileName,
content: document.base64,
mimeType: document.mimeType,
},
};
const response = await ld.request(options);
return {
success: true,
documentId: response.json.id,
message: `Uploaded ${document.fileName} successfully`,
};
```
#### Returning Files from Actions
Actions can also generate and return files:
```javascript theme={null}
// Generate a CSV export (fetchCustomerData is a placeholder for your own
// data-fetching logic). Avoid naming the result `data`, which would shadow
// the platform-provided input object.
const customers = await fetchCustomerData();
const csvHeader = "Name,Email,Created";
const csvRows = customers.map(
  (customer) => `"${customer.name}","${customer.email}","${customer.created}"`
);
const csvContent = [csvHeader, ...csvRows].join("\n");
return {
  files: {
    fileName: `customers-${new Date().toISOString().slice(0, 10)}.csv`,
    mimeType: "text/csv",
    text: csvContent, // Use 'text' for UTF-8 content, 'base64' for binary
  },
  success: true,
  exported: customers.length,
};
```
## Building Triggers
Triggers monitor external systems for events and can start workflows automatically.
### When to Build Triggers
* **Event monitoring**: Detect new emails, files, records, or changes
* **Workflow automation**: Start processes when specific events occur
* **Data synchronization**: Keep systems in sync by detecting changes
* **Notifications**: React to external events and notify users
### Trigger Types
* **Polling Triggers**: Periodically check APIs for new events
* **Webhook Triggers**: Receive real-time notifications from external systems
### Setting Up Polling Triggers
1. **Add Trigger**: In your integration, click "Add Trigger"
2. **Configure Settings**: Set name, description, and polling interval
3. **Add Input Fields**: Define configuration parameters (optional)
4. **Write Trigger Code**: Implement the polling logic
5. **Test**: Validate your trigger detects events correctly
#### Required Return Format
Triggers must return an array of events with this structure:
```javascript theme={null}
return [
{
id: "unique_event_id", // Required: Unique identifier
timestamp: "2024-01-15T...", // Required: Event timestamp (ISO string)
data: {
// Your event data here
eventType: "new_email",
subject: "Important message",
from: "sender@example.com",
// ... other event properties
},
},
];
```
#### Example: New Email Trigger
```javascript theme={null}
// Fetch recent emails
const options = {
method: "GET",
url: "https://api.email-service.com/messages",
headers: {
Authorization: `Bearer ${data.auth.access_token}`,
},
params: {
since: new Date(Date.now() - 60 * 60 * 1000).toISOString(), // Last hour
limit: 10,
},
};
try {
const response = await ld.request(options);
const emails = response.json.messages || [];
// Transform to required format
const results = emails.map((email) => ({
id: email.id,
timestamp: email.receivedAt,
data: {
messageId: email.id,
subject: email.subject,
from: email.from,
to: email.to,
body: email.body,
isRead: email.isRead,
},
}));
return results;
} catch (error) {
ld.log("Error fetching emails:", error.message);
return [];
}
```
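Since each event's `id` must be unique and `timestamp` must be a valid ISO string, it can pay off to validate the array before returning it. A minimal sketch, not part of the Langdock runtime:

```javascript
// Filter polled events down to those matching the required trigger return
// format: a unique string `id`, a parseable `timestamp`, and an object
// `data` payload. Illustrative helper only.
function validateTriggerEvents(events) {
  const seen = new Set();
  return events.filter((event) => {
    const ok =
      typeof event.id === "string" &&
      !seen.has(event.id) &&
      !Number.isNaN(Date.parse(event.timestamp)) &&
      typeof event.data === "object" &&
      event.data !== null;
    seen.add(event.id);
    return ok;
  });
}

const events = [
  { id: "msg_1", timestamp: "2024-01-15T10:00:00Z", data: { subject: "Hi" } },
  { id: "msg_1", timestamp: "2024-01-15T10:00:00Z", data: { subject: "Hi" } }, // duplicate id
  { id: "msg_2", timestamp: "not a date", data: {} },                          // bad timestamp
];
// validateTriggerEvents(events) keeps only the first msg_1 entry
```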
#### Triggers with File Attachments
When triggers detect events with files, include them in the data object:
```javascript theme={null}
const results = [];
for (const email of emails) {
const attachments = [];
// Process email attachments
if (email.attachments && email.attachments.length > 0) {
for (const attachment of email.attachments) {
const content = await downloadAttachment(attachment.id);
attachments.push({
fileName: attachment.filename,
mimeType: attachment.mimeType,
base64: content.toString("base64"),
});
}
}
results.push({
id: email.id,
timestamp: email.receivedAt,
data: {
subject: email.subject,
from: email.from,
body: email.body,
files: attachments, // Files go inside data object
},
});
}
return results;
```
### Setting Up Webhook Triggers
1. **Add Trigger**: Choose "Webhook" type
2. **Configure Endpoint**: Note the provided webhook URL
3. **Set Up External System**: Configure your service to send events to the webhook URL
4. **Write Processing Code**: Transform incoming webhook data (optional)
5. **Test**: Send test events to verify functionality
#### Webhook Processing Example
```javascript theme={null}
// Transform incoming webhook data
const webhookData = data.input.webhookPayload;
return [
{
id: webhookData.event_id || crypto.randomUUID(),
timestamp: webhookData.timestamp || new Date().toISOString(),
data: {
eventType: webhookData.type,
resourceId: webhookData.resource_id,
action: webhookData.action,
changes: webhookData.changes,
// Transform webhook format to your preferred structure
},
},
];
```
## Native Actions
Native actions let you search and download files that aren't stored locally on a user's device. We've already built native actions for SharePoint, OneDrive, Google Drive, and Confluence. You can access these via the **Select files** button to search and attach files directly to Chat or Agent Knowledge.
*Attach files to the chat using native integrations.*
*Attach files to the Agent knowledge by using a native integration.*
Building native actions for other tools enables you to search and download files from those platforms in the same way.
#### Setting up a Native Action
To set up a native action, begin building your integration as usual. Add another action, and in **Step 1** under Advanced, select either "**Search files**" or "**Download file**" as the action type.
Afterwards, you build the action as you would any other, with one difference: your function must return a specific object structure.
#### Required Output Format
Depending on the action you select, your function must return a specific object structure. This ensures compatibility and enables agents to handle files and search results correctly.
**Search files**: When building a native search integration, your function must return an array of objects matching the following schema:
```typescript theme={null}
{
  url: string,
  documentId: string,
  title: string,
  author?: {
    id: string,
    name: string,
    imgUrl?: string,
  },
  mimeType: string,
  lastSeenByUser?: Date,
  createdDate?: Date,
  lastModifiedByAnyone?: Date,
  lastModifiedByUserId?: {
    id?: string,
    name?: string,
    lastModifiedByUserIdDate: Date,
  },
  parent?: {
    id: string,
    title?: string,
    url?: string,
    type?: string,
    driveId?: string,
    siteId?: string,
    listId?: string,
    listItemId?: string,
  },
  contentPreview?: string,
}
```
The *title* and *mimeType* will be displayed in the UI for all search results.
Please also check out a detailed description for each parameter:
### **Required Fields**
| Field | Type | Description |
| :--------- | :----- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| url | string | **The web URL where users can view or edit the file.** This should be a direct link that opens the file in the source application (e.g., Google Docs editor, SharePoint viewer). Must be a valid HTTPS URL that the user can access with their credentials. |
| documentId | string | **The unique identifier of the file in the source system.** This ID is used internally to reference the file and should remain stable across searches. Can be any string format (UUID, numeric ID, etc.) as long as it uniquely identifies the file. |
| title | string | **The display name of the file.** This is what users will see in search results. Should include the file extension if relevant (e.g., "Report.pdf", "Budget.xlsx"). |
| mimeType | string | **The MIME type of the file.** Used to determine the file type icon and category (e.g. "application/pdf", "text/plain", "application/vnd.google-apps.document"). |
### **Optional Fields**
| Field | Type | Description |
| :-------------------------------------------- | :----- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| author | object | **Information about who created the file.** Helps users identify file ownership and origin. |
| author.id | string | **Unique identifier of the author.** Typically an email address or user ID in the source system. |
| author.name | string | **Display name of the author.** The human-readable name shown to users (e.g., "John Doe"). |
| lastSeenByUser | string | **When the current user last viewed this file.** ISO 8601 date string (e.g., "2024-01-15T10:30:00Z"). Used for "Recently viewed" sorting. Return null or omit if the user has never viewed the file. |
| createdDate | string | **When the file was originally created.** ISO 8601 date string. Helps users understand file age and sort by creation date. |
| lastModifiedByAnyone | string | **When the file was last modified by any user.** ISO 8601 date string. Critical for identifying recently updated content and collaborative work. |
| lastModifiedByUserId | object | **Information about who last modified the file.** Helps track recent changes in collaborative environments. Entire object should be omitted if any required sub-field is missing. |
| lastModifiedByUserId.id | string | **Unique identifier of the last editor.** Typically an email or user ID. |
| lastModifiedByUserId.name | string | **Display name of the last editor.** Human-readable name of who made the last changes. |
| lastModifiedByUserId.lastModifiedByUserIdDate | string | **Timestamp of the last modification.** ISO 8601 date string. Usually matches lastModifiedByAnyone. |
| parent | object | **Information about the file's location/container.** Helps users understand file organization and navigate to parent folders. |
| parent.id | string | **Unique identifier of the parent folder/container.** Used for folder-based operations and navigation. |
| parent.title | string | **Display name of the parent folder.** Shown to help users understand file location (e.g., "Marketing Materials", "Q1 Reports"). |
| parent.url | string | **Web URL to view the parent folder.** Direct link to open the folder in the source application. |
| parent.type | string | **Type of parent container.** Optional classifier (e.g., "folder", "workspace", "site"). |
| parent.driveId | string | **Identifier of the drive/library containing the file.** For services with multiple storage locations (e.g., SharePoint sites, Google Shared Drives). |
| parent.siteId | string | **Identifier of the site containing the file.** Specific to SharePoint and similar platforms with site-based organization. |
| parent.listId | string | **Identifier of the list containing the file.** For list-based storage systems. |
| parent.listItemId | string | **Identifier of the list item associated with the file.** For files attached to list items. |
| contentPreview | string | **A text snippet from the file's content.** Provides context about file contents in search results. Should be plain text, typically 100-200 characters. Useful for showing relevant excerpts that match search queries. Set to null if content preview is not available. |
### **Usage Guidelines**
**Dates**: \
All date fields must be valid ISO 8601 strings or omitted entirely. Invalid dates will cause parsing errors.
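One way to guard against invalid dates is to normalize every value through `Date` before returning it, omitting anything that fails to parse. A minimal sketch:

```javascript
// Return a valid ISO 8601 string, or undefined so the field can be
// omitted. Invalid input must not be passed through, since it breaks
// parsing downstream. Illustrative helper.
function toIso8601(value) {
  if (value == null) return undefined;
  const date = new Date(value);
  return Number.isNaN(date.getTime()) ? undefined : date.toISOString();
}

toIso8601("2024-01-15T10:30:00Z"); // → "2024-01-15T10:30:00.000Z"
toIso8601("not a date");           // → undefined
```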
**Null vs Omitted**:
* Use null for fields that are explicitly empty (e.g., no content preview available)
* Omit fields entirely if the data is not applicable or unavailable
**Parent Information**: Include as much parent information as available to help users navigate file hierarchies.
**Author Information**: Always include both id and name in author objects, or omit the entire object.
**Search Relevance**: Fields like contentPreview can significantly improve search UX by showing why a file matched the query.
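The guidelines above can be enforced with a small check before returning search results — a sketch of what such a validator might look like (not part of the Langdock runtime):

```javascript
// Reject search results that are missing any of the four required string
// fields (url, documentId, title, mimeType), so malformed entries never
// reach the UI. Illustrative helper.
function isValidSearchResult(result) {
  return ["url", "documentId", "title", "mimeType"].every(
    (field) => typeof result[field] === "string" && result[field].length > 0
  );
}

isValidSearchResult({
  url: "https://example.com/doc",
  documentId: "123",
  title: "Report.pdf",
  mimeType: "application/pdf",
}); // → true
isValidSearchResult({ title: "Report.pdf" }); // → false
```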
Below you can find an example implementation for the native SharePoint *Search files* action.
```typescript theme={null}
const entityTypes = ["driveItem"];
const queryString = data.input.query;
try {
// Perform search if query exists
const searchRequest = {
requests: [
{
entityTypes,
query: { queryString },
trimDuplicates: true,
queryAlterationOptions: {
enableModification: true,
enableSuggestions: true,
},
},
],
};
const searchResult = await ld.request({
method: 'POST',
url: 'https://graph.microsoft.com/v1.0/search/query',
body: searchRequest,
headers: {
'Authorization': `Bearer ${data.auth.access_token}`,
'Content-Type': 'application/json',
},
});
const hits = searchResult.json?.value?.[0]?.hitsContainers?.[0]?.hits;
if (hits && hits.length > 0) {
const results = hits.filter((hit) => hit.resource.name).map((hit) => {
const { resource } = hit;
return {
url: encodeURI(`${resource.webUrl}?web=1`),
documentId: resource.id,
title: resource.name,
mimeType: getMimeTypeFromFileName(resource.name),
author: resource.createdBy?.user ? {
id: resource.createdBy.user.email || '',
name: resource.createdBy.user.displayName || '',
} : undefined,
createdDate: resource.createdDateTime,
lastModifiedByAnyone: resource.lastModifiedDateTime,
lastModifiedByUserId: resource.lastModifiedBy?.user ? {
id: resource.lastModifiedBy.user.email || '',
name: resource.lastModifiedBy.user.displayName || '',
lastModifiedByUserIdDate: resource.lastModifiedDateTime,
} : undefined,
parent: resource.parentReference ? {
id: resource.parentReference.id || '',
title: resource.parentReference.path ? resource.parentReference.path.split('/').pop() : undefined,
driveId: resource.parentReference.driveId || '',
} : undefined,
};
});
return results;
}
return [];
} catch (error) {
ld.log(`Error: ${error.message}, Stack: ${error.stack}`);
return [];
}
function getMimeTypeFromFileName(fileName) {
const extension = fileName.split('.').pop().toLowerCase();
const mimeTypes = {
'txt': 'text/plain',
'doc': 'application/msword',
'docx': 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
'pdf': 'application/pdf',
};
return mimeTypes[extension] || 'application/octet-stream';
}
```
**Download file**\
For native download actions, return an object in the following format:
```typescript theme={null}
{
data: response.data,
fileName: string,
mimeType: string,
buffer: response.buffer, // Conditional
url: string,
lastModified: Date,
text: string // Conditional
}
```
The *fileName* and *mimeType* will be displayed in the UI for the downloaded file.
Please also check out a detailed description for each parameter:
### Overview
| Field | Type | Required | Description |
| :----------- | :---------- | :-------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| fileName | string | **Yes** | **The complete filename including extension.** This is the name that will be used when saving the file. Should match the original filename from the source system (e.g., "Budget\_2024.xlsx", "Design\_Final.pdf"). If the file doesn't have an extension in the source, add the appropriate one based on mimeType. |
| mimeType | string | **Yes** | **The MIME type identifying the file format.** Determines how the file will be processed and what icon to display. Must be a valid MIME type (e.g., "application/pdf", "image/png", "text/plain"). For Google Workspace files, use the original MIME type, not the exported format's type. |
| buffer | Buffer | **Conditional** | **The binary content of the file as a Buffer.** Required for binary files (PDFs, images, Office docs). The actual file data that will be saved. Can be provided as: native Buffer object, base64 encoded string, or object with format type:"Buffer", data: \[byte array]. |
| url | string | **Yes** | **The web URL to view/edit the file in its source application.** Should be a direct link that opens the file when clicked (e.g., Google Drive viewer URL, SharePoint document URL). Used for users to access the original file and for reference tracking. |
| lastModified | string/Date | **Yes** | **ISO 8601 timestamp of the file's last modification.** Indicates when the file content was last changed. Can be a Date object or ISO string like "2024-01-15T10:30:00Z". Used for version tracking and determining file freshness. |
| data | any | **No** | **Raw response data from the API call.** Optional field that can include additional metadata from the source system. This is typically the raw JSON response and is used for debugging or accessing extra properties not mapped to other fields. Not processed by the system. |
| text | string | **Conditional** | **The text content of the file as a UTF-8 string.** Required for text-based files instead of buffer. Use for plain text, HTML, JSON, CSV, or any human-readable format. Should contain the complete file content. Cannot be used together with buffer. |
### **Important Notes**
**Content Fields**: You must provide either `buffer` OR `text`, never both:
* Use `buffer` for: images, PDFs, Office documents, videos, and any other binary format
* Use `text` for: plain text, HTML, source code, JSON, XML, and any other text format
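For illustration, here are two minimal file objects following each convention. The file names, URLs, and content below are made up:

```javascript theme={null}
// Illustrative only: binary files use `buffer`, text files use `text`
const pdfFile = {
  fileName: "Budget_2024.pdf",
  mimeType: "application/pdf",
  buffer: Buffer.from([0x25, 0x50, 0x44, 0x46]), // "%PDF" magic bytes as a stand-in
  url: "https://example.com/view/budget-2024",   // placeholder viewer URL
  lastModified: "2024-01-15T10:30:00Z",
};

const noteFile = {
  fileName: "notes.txt",
  mimeType: "text/plain",
  text: "Meeting notes in plain UTF-8 text", // `text` replaces `buffer` here
  url: "https://example.com/view/notes",
  lastModified: new Date().toISOString(),
};
```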
**Data Field**: The data field is optional and typically contains the raw API response. It's not used by the core system but can be helpful for:
* Debugging integration issues
* Preserving additional metadata
* Passing through vendor-specific properties
**MIME Type Accuracy**: The mimeType must accurately reflect the content being returned:
* For native Google Docs exported as HTML, still use "application/vnd.google-apps.document"
* For converted files, use the original source MIME type, not the export format
**File Naming**: The fileName should:
* Include the correct file extension
* Match what users expect from the source system
* Be sanitized to remove invalid filesystem characters
**URL Requirements**: The url must:
* Be accessible with the user's authentication
* Open the file in the source application (not a download link)
* Be a stable link that won't expire quickly
Below you can find an example implementation for the native SharePoint *Download file* action. It resolves the item through the Microsoft Graph API, so the same pattern also covers OneDrive items.
```typescript theme={null}
async function downloadOneDriveFile() {
try {
// Construct the API path based on the input configuration
const config = JSON.parse(data.input.parent);
let apiPath = '';
if (config.listId && config.listItemId) {
apiPath = `/sites/${config.siteId}/lists/${config.listId}/items/${data.input.itemId}/driveItem`;
} else if (config.driveId) {
apiPath = `/drives/${config.driveId}/items/${data.input.itemId}`;
} else if (config.groupId) {
apiPath = `/groups/${config.groupId}/drive/items/${data.input.itemId}`;
} else if (config.userId) {
apiPath = `/users/${config.userId}/drive/items/${data.input.itemId}`;
} else if (config.siteId) {
apiPath = `/sites/${config.siteId}/drive/items/${data.input.itemId}`;
} else {
throw new Error('Insufficient information to construct API path');
}
// Make the request to get the file metadata including download URL
const options = {
method: 'GET',
url: `https://graph.microsoft.com/v1.0${apiPath}`,
headers: {
'Authorization': 'Bearer ' + data.auth.access_token,
'Accept': 'application/json',
},
};
const response = await ld.request(options);
if (response.json['@microsoft.graph.downloadUrl']) {
const downloadUrl = response.json['@microsoft.graph.downloadUrl'];
// Request to download the file content
const contentOptions = {
method: 'GET',
url: downloadUrl,
responseType: 'stream'
};
const contentResponse = await ld.request(contentOptions);
if (contentResponse.status !== 200) {
throw new Error(
`Error fetching file content: ${JSON.stringify(contentResponse)}`
);
}
return {
fileName: response.json.name,
mimeType: response.json.file.mimeType,
buffer: contentResponse.buffer,
url: response.json.webUrl,
lastModified: response.json.lastModifiedDateTime,
};
} else {
throw new Error('Could not download file!');
}
} catch (error) {
ld.log('Error downloading item from OneDrive: ' + error.message);
throw error;
}
}
return downloadOneDriveFile();
```
## Accessing Input Fields
Use `data.input.{inputFieldId}` for input field values and `data.auth.{authenticationFieldId}` for authentication field values from the user's current connection.
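As a sketch, with hypothetical field ids `searchQuery` (input) and `access_token` (authentication) — at runtime the platform supplies the `data` object, which is mocked here for illustration:

```javascript theme={null}
// The platform provides `data` at runtime; this shape is mocked for illustration
const data = {
  input: { searchQuery: "quarterly report" }, // value of an input field with id "searchQuery"
  auth: { access_token: "token-value" },      // auth field from the user's current connection
};

const query = data.input.searchQuery;
const token = data.auth.access_token;
```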
## Built-in Functions for Custom Code Sections
Use our [Integration Agent](/resources/integrations/agent) to help set up your integration functions.
Custom code sections have access to a set of built-in utility functions for common operations. Here are the most commonly used:
### Essential Functions
* **`ld.request()`** - Make HTTP requests to external APIs
* **`ld.log()`** - Output debugging information
* **`atob()` / `btoa()`** - Base64 encoding/decoding
* **`JSON.stringify()` / `JSON.parse()`** - JSON manipulation
Beyond these, the sandbox provides utilities for data conversions (CSV, Parquet, Arrow), SQL validation, cryptography, AWS request signing, Microsoft XMLA integration, and more.
### Quick Examples
**HTTP Request**
```javascript theme={null}
const options = {
method: "GET",
url: `https://www.googleapis.com/calendar/v3/calendars/${data.input.calendarId}/events/${data.input.eventId}`,
headers: {
Authorization: "Bearer " + data.auth.access_token,
Accept: "application/json",
},
};
const response = await ld.request(options);
return response.json;
```
**JSON Parsing**
```javascript theme={null}
const properties = data.input.properties
? JSON.parse(data.input.properties)
: {};
const options = {
method: "PATCH",
url: `https://api.hubapi.com/crm/v3/objects/companies/${data.input.companyId}`,
headers: {
Authorization: "Bearer " + data.auth.access_token,
"Content-Type": "application/json",
},
body: { properties },
};
```
**Base64 Encoding**
```javascript theme={null}
const auth = btoa(`${env.CLIENT_ID}:${env.CLIENT_SECRET}`);
```
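The encoded value is typically sent as a Basic authorization header, and `atob()` reverses the encoding. The credentials below are placeholders:

```javascript theme={null}
// Placeholder credentials; real values come from authentication fields or env
const env = { CLIENT_ID: "my-client", CLIENT_SECRET: "my-secret" };

const auth = btoa(`${env.CLIENT_ID}:${env.CLIENT_SECRET}`);
const headers = { Authorization: "Basic " + auth };

// atob() decodes back to the original string
const decoded = atob(auth); // "my-client:my-secret"
```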
### Sandbox Library Restrictions
> Custom integration code runs in a secure sandboxed environment.
> **You cannot install or import external libraries (npm, pip, etc.) - only a limited set of built-in JavaScript/Node.js APIs are available.**
> For advanced processing (e.g., PDF parsing, image manipulation), use external APIs or services and call them from your integration code.
## Best Practices
### Action Design
* **Single responsibility**: Each action should do one thing well
* **Clear naming**: Use descriptive action names that explain the purpose
* **Input validation**: Always validate required inputs and provide helpful error messages
* **Error handling**: Catch and handle API errors gracefully
* **Logging**: Use `ld.log()` to help with debugging
### ID Handling
Most API calls require specific internal IDs. The challenge is that agents can't guess these IDs, which creates a poor user experience when calling actions like "get specific contact in HubSpot" or "add event to specific calendar in Google Calendar."
The solution: Create helper actions that retrieve and return these IDs to the agent first. For example, our `Get deal context` function for HubSpot uses GET endpoints to gather internal IDs for available pipelines and stages. This enables agents to use actions like `Create deal` or `Update deal` much more effectively since they now have the required context.
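A sketch of the idea: the raw payload below is made up, but a helper action would fetch something like it (e.g. via `ld.request()`) and return a compact ID map the agent can reference in follow-up actions:

```javascript theme={null}
// Hypothetical raw pipelines payload; a real helper would fetch this from the API
const pipelinesResponse = {
  results: [
    {
      id: "default",
      label: "Sales Pipeline",
      stages: [
        { id: "appointmentscheduled", label: "Appointment Scheduled" },
        { id: "closedwon", label: "Closed Won" },
      ],
    },
  ],
};

// Reduce to just the IDs and labels the agent needs for Create/Update deal
const dealContext = pipelinesResponse.results.map((pipeline) => ({
  pipelineId: pipeline.id,
  pipeline: pipeline.label,
  stages: pipeline.stages.map((stage) => ({ stageId: stage.id, stage: stage.label })),
}));
```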
### Performance
* **Minimize API calls**: Batch operations when possible
* **Use pagination**: Handle large datasets appropriately
* **Timeout handling**: Set appropriate timeouts for external API calls
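The pagination point can be sketched as a cursor loop. The `fetchPage` callback and `nextCursor` field are illustrative; adapt the names to the API you call:

```javascript theme={null}
// Generic cursor-pagination loop; field names vary per API
async function fetchAllPages(fetchPage) {
  const items = [];
  let cursor; // undefined on the first request
  do {
    const page = await fetchPage(cursor); // expected shape: { items: [...], nextCursor?: string }
    items.push(...page.items);
    cursor = page.nextCursor;
  } while (cursor);
  return items;
}
```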
### Security
* **Validate inputs**: Never trust user input without validation
* **Sanitize data**: Clean data before sending to external APIs
* **Handle secrets**: Use authentication fields for sensitive data, never hardcode
* **Rate limiting**: Respect API rate limits and implement backoff strategies
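For the rate-limiting point, a minimal retry-with-exponential-backoff wrapper might look like this. `attemptRequest` stands in for your actual API call; the delays are illustrative:

```javascript theme={null}
// Retry a request with exponential backoff: 500ms, 1s, 2s, ...
async function withBackoff(attemptRequest, maxRetries = 3, baseDelayMs = 500) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await attemptRequest();
    } catch (error) {
      if (attempt >= maxRetries) throw error; // give up after the last retry
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```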
***
# FAQ & Troubleshooting
Source: https://docs.langdock.com/resources/integrations/faq
We have collected a few frequently asked questions and steps to resolve issues.
## SharePoint, OneDrive, Confluence, Google Drive
### Which permissions does my integration have?
Integrations use either API keys or OAuth 2.0 to access external tools. If you're redirected to another tool's login, you're using OAuth authentication.
All OAuth integrations inherit your existing permissions from the connected tool. This means **you'll only find files you already have access to** in that tool.
Example: User A saves a document in SharePoint without sharing it. User B authenticates through Langdock and searches for that file, but can't find it because User A hasn't granted access. Once User A shares the file, User B will be able to find it through Langdock.
### Can I use OneDrive, SharePoint, Confluence and Google Drive to find information?
Currently, you need to add individual files to chat or agent knowledge to work with your integrations. This works well when you want to analyze specific documents.
For broader search across all your tools (similar to web search), the technical complexity is higher because we need to download and vectorize entire folder structures rather than using the limited keyword-based search APIs these platforms provide. This is among our highest priorities and something many customers request, so we're working on a solution as quickly as possible.
### Is all content of the integration being imported into Langdock?
No, Langdock uses each platform's search API to find files by keyword or name. Only when you select a specific file is it imported into Langdock.
### How often is my content updated when using integrations?
When searching for files, we query the external tool in real-time, so you'll see new files immediately.
Files already imported into agent knowledge sync once daily. You can also trigger manual synchronization anytime.
***
## General
### I receive an error when using my integration
If an action fails, check these common fixes:
**Input parameters**
Some requests need specific IDs or formatted queries. If the parameter description doesn't provide enough detail, add this information to your agent instructions or specify it in your chat prompt.
**OAuth connection**
Actions require a valid OAuth connection. Add one by requesting a specific action for that integration in chat, or connect directly via the integration menu.
# File Support in Custom Integrations
Source: https://docs.langdock.com/resources/integrations/file-support-for-actions
Handle file inputs and outputs in custom actions and triggers.
## 1. File Input in Actions
File inputs allow users to upload files that your action can then process or send to external APIs.
### 1.1 When to Use File Inputs
* **Upload files to external tools**: Send user-uploaded documents to APIs, cloud storage, or external services like email or ticketing systems.
* **Process user files**: Analyze, convert, or transform files uploaded by users
### 1.2 Adding File Input Fields
#### Single File Input
Add an input field of type "FILE" to accept one file.
#### Multiple File Input
For multiple files, enable *Allow multiple files*.
### 1.3 Accessing File Data in Code
Every uploaded file is delivered as a **`FileData`** object with this exact format:
```typescript theme={null}
{
fileName: "Invoice.pdf",
mimeType: "application/pdf",
size: 102400, // bytes
base64: "JVBERi0xLjQK...", // binary content, Base64-encoded
lastModified: "2024-01-15T10:30:00Z" // ISO date string
}
```
> **Important**: File inputs do NOT include a `text` property. The `text` shortcut only exists for file outputs.
#### Single File Access
```javascript theme={null}
// 'document' in this example is the id of the created input field
const document = data.input.document; // FileData object
const buffer = Buffer.from(document.base64, "base64");
ld.log(`Processing ${document.fileName} (${document.size} bytes)`);
```
#### Multiple Files Access
```javascript theme={null}
// 'attachments' in this example is the id of the created input field
const files = data.input.attachments; // FileData[] array
for (const file of files) {
const buffer = Buffer.from(file.base64, "base64");
await processFile(buffer, file.mimeType);
}
```
> **Data Structure**: When "Allow multiple files" is enabled, `data.input.fieldName` is an array. Otherwise, it's a single object.
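If an action should work with either configuration, a small normalizer keeps the rest of the code uniform (the field id is illustrative):

```javascript theme={null}
// Normalize single-file or multi-file input to an array of FileData objects
function toFileArray(value) {
  if (!value) return [];
  return Array.isArray(value) ? value : [value];
}

// e.g. const files = toFileArray(data.input.attachments);
```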
### 1.4 Common Input Patterns
```javascript theme={null}
// Gmail-style email with file attachments
const recipient = data.input.mailRecipient;
const subject = data.input.mailSubject;
const body = data.input.mailBody;
const attachments = data.input.attachments || [];
// Build multipart email
let email = `To: ${recipient}\r\nSubject: ${subject}\r\n`;
if (attachments.length > 0) {
const boundary = `boundary_${Date.now()}`;
email += `Content-Type: multipart/mixed; boundary="${boundary}"\r\n\r\n`;
email += `--${boundary}\r\nContent-Type: text/html\r\n\r\n${body}\r\n`;
// Add each attachment
for (const attachment of attachments) {
email += `--${boundary}\r\n`;
email += `Content-Type: ${attachment.mimeType}\r\n`;
email += `Content-Transfer-Encoding: base64\r\n`;
email += `Content-Disposition: attachment; filename="${attachment.fileName}"\r\n`;
email += `\r\n${attachment.base64}\r\n`;
}
email += `--${boundary}--`;
}
```
```javascript theme={null}
const document = data.input.document;
// Validate file type
if (!document.mimeType.startsWith('application/pdf')) {
throw new Error('Only PDF files are supported');
}
try {
const response = await ld.request({
method: 'POST',
url: 'https://api.example.com/documents',
headers: {
'Authorization': `Bearer ${data.auth.apiKey}`,
'Content-Type': 'application/json'
},
body: {
filename: document.fileName,
content: document.base64,
mimeType: document.mimeType
}
});
return {
success: true,
documentId: response.json.id,
message: `Uploaded ${document.fileName} successfully`
};
} catch (error) {
return {
success: false,
error: `Failed to upload ${document.fileName}: ${error.message}`
};
}
```
```javascript theme={null}
const files = data.input.files || [];
if (files.length === 0) {
return { error: 'No files provided' };
}
const results = [];
for (const file of files) {
try {
const buffer = Buffer.from(file.base64, 'base64');
// Process based on file type
let result;
if (file.mimeType.startsWith('image/')) {
result = await processImage(buffer);
} else if (file.mimeType === 'application/pdf') {
result = await processPDF(buffer);
} else {
throw new Error(`Unsupported file type: ${file.mimeType}`);
}
results.push({
filename: file.fileName,
status: 'success',
result: result
});
ld.log(`Processed ${file.fileName} successfully`);
} catch (error) {
results.push({
filename: file.fileName,
status: 'error',
error: error.message
});
ld.log(`Failed to process ${file.fileName}: ${error.message}`);
}
}
return {
success: true,
processed: results.filter(r => r.status === 'success').length,
failed: results.filter(r => r.status === 'error').length,
results: results
};
```
### 1.5 Input Validation & Error Handling
```javascript theme={null}
// Validate file presence
const files = data.input.attachments;
if (!files || files.length === 0) {
return { error: "No files provided. Please attach at least one file." };
}
// Validate file types
const allowedTypes = ["application/pdf", "image/jpeg", "image/png"];
for (const file of files) {
if (!allowedTypes.includes(file.mimeType)) {
return {
error: `Unsupported file type: ${
file.mimeType
}. Allowed: ${allowedTypes.join(", ")}`,
};
}
}
// Log file metadata for debugging
ld.log(
"Processing files:",
files.map((f) => ({
fileName: f.fileName,
mimeType: f.mimeType,
size: f.size,
}))
);
```
## 2. File Output in Actions
File outputs allow your action to generate and return files that users can download or use in subsequent actions.
### 2.1 When to Use File Outputs
* **Generate reports**: Create PDFs, spreadsheets, or documents from data
* **Retrieve files from APIs**: Download files from external services
* **Transform files**: Convert between formats or process uploaded files
* **Export data**: Create CSV exports, backup files, or data dumps
### 2.2 File Output Format
Return files under a `files` key in your response:
```javascript theme={null}
// Single file output
return {
files: {
fileName: "report.pdf",
mimeType: "application/pdf",
base64: "JVBERi0xLjQK...", // Base64 encoded content
},
};
// Multiple files output
return {
files: [
{
fileName: "data.csv",
mimeType: "text/csv",
text: "Name,Email\nJohn,john@example.com", // Text shortcut for UTF-8
},
{
fileName: "chart.png",
mimeType: "image/png",
base64: "iVBORw0KGgoAAAANSUhEUgAA...",
},
],
};
```
#### File Output Properties
| Field | Required | Notes |
| -------------- | -------- | ------------------------------------------ |
| `fileName` | ✓ | Include proper file extension |
| `mimeType` | ✓ | Accurate MIME type for proper handling |
| `base64` | ✓\* | Base64 encoded binary content |
| `text` | ✓\* | UTF-8 text content (alternative to base64) |
| `lastModified` | – | ISO date string (defaults to current time) |
\*Provide either `base64` OR `text`, never both.
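A small guard can enforce the either/or rule before returning (sketch only):

```javascript theme={null}
// Ensure a file output declares exactly one content field
function validateFileOutput(file) {
  const hasBase64 = typeof file.base64 === "string";
  const hasText = typeof file.text === "string";
  if (hasBase64 === hasText) {
    // true === true means both were set; false === false means neither was
    throw new Error(`"${file.fileName}": provide either base64 or text, never both`);
  }
  return file;
}
```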
### 2.3 Common Output Patterns
```javascript theme={null}
// Generate a PDF report from data
const reportData = await fetchReportData();
// Create PDF content (using a custom-built function or endpoint)
const pdfBuffer = await generatePDF({
title: 'Monthly Report',
data: reportData,
template: 'standard'
});
return {
files: {
fileName: `monthly-report-${new Date().toISOString().slice(0,7)}.pdf`,
mimeType: 'application/pdf',
base64: pdfBuffer.toString('base64')
},
success: true,
message: `Generated report with ${reportData.length} entries`
};
```
```javascript theme={null}
// Export data as CSV using text shortcut
const customers = await fetchCustomers();
// Build CSV content
const csvHeader = 'Name,Email,Created,Status';
const csvRows = customers.map(c =>
`"${c.name}","${c.email}","${c.created}","${c.status}"`
);
const csvContent = [csvHeader, ...csvRows].join('\n');
return {
files: {
fileName: `customers-export-${new Date().toISOString().slice(0,10)}.csv`,
mimeType: 'text/csv',
text: csvContent // Use text for UTF-8 content
},
success: true,
exported: customers.length
};
```
```javascript theme={null}
// Download file from external API
const fileId = data.input.fileId;
try {
const response = await ld.request({
method: 'GET',
url: `https://api.example.com/files/${fileId}`,
headers: {
'Authorization': `Bearer ${data.auth.apiKey}`
}
});
// Get filename from response headers or API
const filename = response.headers['content-disposition']
?.match(/filename="(.+)"/)?.[1] || `file-${fileId}`;
return {
files: {
fileName: filename,
mimeType: response.headers['content-type'] || 'application/octet-stream',
base64: response.body // Assuming API returns base64
},
success: true,
message: `Downloaded ${filename}`
};
} catch (error) {
return {
success: false,
error: `Failed to download file: ${error.message}`
};
}
```
```javascript theme={null}
// Process input files and return modified versions
const inputFiles = data.input.documents || [];
const outputFiles = [];
for (const file of inputFiles) {
try {
const buffer = Buffer.from(file.base64, 'base64');
// Process file (e.g., compress, convert, etc.)
const processedBuffer = await processFile(buffer, file.mimeType);
outputFiles.push({
fileName: `processed-${file.fileName}`,
mimeType: file.mimeType,
base64: processedBuffer.toString('base64')
});
} catch (error) {
ld.log(`Failed to process ${file.fileName}: ${error.message}`);
}
}
return {
files: outputFiles,
success: true,
processed: outputFiles.length,
message: `Processed ${outputFiles.length} of ${inputFiles.length} files`
};
```
## 3. File Output in Triggers
Triggers can also return files when detecting events that include file attachments or when generating files based on trigger data.
> **Key Difference**: Unlike actions that return objects directly, triggers must return an **array** of events. Each event needs `id`, `timestamp`, and `data` properties, with files inside the `data` object.
### 3.1 When to Use File Outputs in Triggers
* **Email attachments**: Forward attachments from incoming emails
* **File change events**: Return modified or new files from monitored systems
* **Generated notifications**: Create summary files or reports when events occur
* **API webhook files**: Process and forward files from webhook payloads
### 3.2 Trigger File Output Format
Triggers must return an array of objects with `id`, `timestamp`, and `data` properties. Files go **inside** the `data` object:
```javascript theme={null}
// Required trigger format with files
return [
{
id: "msg_123", // Unique identifier for this event
timestamp: "2024-01-15T10:30:00Z", // Event timestamp
data: {
from: "sender@example.com",
subject: "Invoice #12345",
body: "Please find the invoice attached.",
messageId: "msg_123",
files: [
// Files go INSIDE data object
{
fileName: "invoice-12345.pdf",
mimeType: "application/pdf",
base64: "JVBERi0xLjQK...",
},
],
},
},
];
```
> **Important**: Unlike actions, triggers must return an array of events, and `files` must be inside the `data` object, not at the top level.
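A tiny helper keeps trigger returns in the required shape (the names here are illustrative):

```javascript theme={null}
// Wrap a payload (plus optional files) in the required trigger event format
function toTriggerEvent(id, timestamp, payload, files = []) {
  return { id, timestamp, data: { ...payload, files } };
}

// e.g. return emails.map((e) => toTriggerEvent(e.id, e.date, { subject: e.subject }, e.files));
```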
### 3.3 Common Trigger Patterns
```javascript theme={null}
// Gmail trigger processing incoming emails
const emails = await fetchNewEmails();
const results = [];
for (const email of emails) {
const attachments = [];
// Process email attachments if any
if (email.attachments && email.attachments.length > 0) {
for (const attachment of email.attachments) {
// Download attachment content
const content = await downloadAttachment(attachment.id);
attachments.push({
fileName: attachment.filename,
mimeType: attachment.mimeType,
base64: content.toString('base64')
});
}
}
results.push({
id: email.id, // Required: unique event ID
timestamp: email.date, // Required: event timestamp
data: {
from: email.from,
subject: email.subject,
body: email.body,
receivedAt: email.date,
messageId: email.id,
threadId: email.threadId,
files: attachments // Files inside data object
}
});
}
return results; // Return array of trigger events
```
```javascript theme={null}
// Monitor for new files in a directory
const newFiles = await checkForNewFiles(data.input.directory);
const results = [];
for (const file of newFiles) {
try {
// Download file content
const content = await downloadFile(file.path);
results.push({
id: file.id || file.path, // Required: unique file identifier
timestamp: file.lastModified, // Required: event timestamp
data: {
filePath: file.path,
fileName: file.name,
size: file.size,
modifiedAt: file.lastModified,
event: 'file_created',
files: [ // Files array inside data object
{
fileName: file.name,
mimeType: file.mimeType || 'application/octet-stream',
base64: content.toString('base64')
}
]
}
});
} catch (error) {
ld.log(`Failed to process file ${file.name}: ${error.message}`);
}
}
return results; // Return array of trigger events
```
## 4. Platform Constraints & Limits
| Constraint | Limit |
| :----------------------------- | :------------ |
| **Total file size per action** | **100 MB** |
| **Maximum files per action** | **20 files** |
| Individual documents | ≤ 256 MB\* |
| Individual images | ≤ 20 MB |
| Individual spreadsheets | ≤ 30 MB |
| Individual audio files | ≤ 200 MB\* |
| Individual video files | ≤ 20 MB |
| Other file types | ≤ 10 MB |
| **Action execution timeout** | **2 minutes** |
\*Still bounded by the total file size limit per action.
> **Validation**: Exceeding limits throws an error **before your code executes**.
```json theme={null}
{
"error": "Total file size (120.0 MB) exceeds the action execution limit of 75.0 MB. Please use smaller files or reduce the number of files."
}
```
### 4.1 Sandbox Library Restrictions
> Custom action and trigger code runs in a secure sandboxed environment.\
> **You cannot install or import external libraries (npm, pip, etc.) - only a limited set of built-in JavaScript/Node.js APIs are available.**\
> For advanced file processing (e.g., PDF parsing, image manipulation), use external APIs or services and call them from your code.
## 5. Best Practices
### File Input Best Practices
* **Validate file types early**: Check MIME types before processing
* **Handle missing files gracefully**: Use `|| []` for optional file arrays
* **Log file metadata**: Help debug issues without exposing content
* **Provide clear error messages**: Tell users exactly what went wrong
```javascript theme={null}
// Good validation pattern
const files = data.input.attachments || [];
if (files.length === 0) {
return { error: "Please attach at least one file to process." };
}
const allowedTypes = ["image/jpeg", "image/png", "application/pdf"];
for (const file of files) {
if (!allowedTypes.includes(file.mimeType)) {
return {
error: `File "${file.fileName}" has unsupported type ${
file.mimeType
}. Allowed: ${allowedTypes.join(", ")}`,
};
}
}
```
### File Output Best Practices
* **Use meaningful filenames**: Include dates, IDs, or descriptive names
* **Set accurate MIME types**: Enables proper file handling and previews
* **Use text shortcut for UTF-8**: More efficient than base64 for text files
* **Include processing status**: Help users understand what happened
```javascript theme={null}
// Good output pattern
return {
files: {
fileName: `customer-report-${new Date().toISOString().slice(0, 10)}.pdf`,
mimeType: "application/pdf",
base64: reportBuffer.toString("base64"),
},
success: true,
recordsProcessed: customerData.length,
message: `Generated report with ${customerData.length} customer records`,
};
```
### Performance Best Practices
* **Process files in parallel when possible**: Use `Promise.all()` for independent operations
* **Avoid loading all files into memory**: Process one at a time for large batches
* **Log progress**: Use `ld.log()` to track processing status
* **Handle errors gracefully**: Continue processing other files if one fails
```javascript theme={null}
// Good error handling pattern
const results = [];
for (const file of files) {
try {
const result = await processFile(file);
results.push({ fileName: file.fileName, status: "success", result });
ld.log(`✓ Processed ${file.fileName}`);
} catch (error) {
results.push({
fileName: file.fileName,
status: "error",
error: error.message,
});
ld.log(`✗ Failed ${file.fileName}: ${error.message}`);
}
}
```
## 6. Troubleshooting
| Issue | Likely Cause | Solution |
| ------------------- | ------------------------------ | ------------------------------------------- |
| "File not found" | User didn't attach file | Make file input required or add validation |
| Size limit error | Files exceed 100 MB total | Ask for smaller files or fewer files |
| "Invalid file type" | Wrong MIME type | Validate `file.mimeType` in your code |
| Empty file content | Base64 encoding issue | Verify `file.base64` is valid |
| Timeout errors | Large files or slow processing | Optimize processing or reduce file sizes |
| Memory errors | Too many large files | Process files sequentially, not in parallel |
### Debug Helper
```javascript theme={null}
// Safe logging for file metadata (never logs content)
function logFileInfo(files) {
const fileInfo = Array.isArray(files) ? files : [files];
ld.log(
"File info:",
fileInfo.map((f) => ({
fileName: f.fileName,
mimeType: f.mimeType,
size: f.size,
hasBase64: !!f.base64,
hasText: !!f.text, // Only exists for outputs
}))
);
}
```
***
# Folder Sync
Source: https://docs.langdock.com/resources/integrations/folder-sync
Sync folders from SharePoint or Google Drive directly to your agents for seamless access to up-to-date knowledge bases
Folder Sync enables you to attach entire folders from SharePoint or Google Drive to your agents, keeping your knowledge base automatically synchronized with daily updates. This feature extends beyond individual file attachments, allowing you to work with larger document collections while maintaining access to the latest versions.
Folder Sync requires the SharePoint or Google Drive integrations to be enabled in your workspace. Contact your admin if these integrations are not available.
## How Folder Sync Works
Unlike [direct file attachments](/resources/faq/knowledge-folders-and-direct-attachments) that send entire documents to the AI model, Folder Sync uses semantic search to identify relevant sections from your synced folders. This approach enables working with larger document collections while staying within AI model context limits.
**Folder Sync**:

* **Processing method**: Semantic search identifies relevant sections
* **File limit**: Up to 200 files per folder
* **Best for**: Large document collections, FAQ agents, department knowledge bases
* **Updates**: Daily automatic synchronization
* **Context**: Only relevant sections sent to model

**Direct attachments**:

* **Processing method**: Entire documents sent to model
* **File limit**: Up to 20 files per agent
* **Best for**: Small, frequently referenced documents
* **Updates**: Manual re-upload required
* **Context**: Complete documents available to model
## Setting Up Folder Sync
1. Open the agent where you want to attach a synced folder and go to the **Knowledge** section.
2. Click the **Attach** button and search for your folder by name. Use specific folder names to quickly locate the content you need; the search looks through all accessible folders in your connected integrations.
3. Before attaching, review the dialog showing the sync parameters for your selected folder:
   * **Daily synchronization**: Files sync once per day for up-to-date content
   * **200 file maximum**: Only the first 200 files will be processed
   * **File type restrictions**: Spreadsheets and images are excluded
   * **Initial sync duration**: First sync can take up to one hour
4. Review the folder contents and click **Attach folder** to begin the initial synchronization.

> Users with access to your agent can interact with all files in the attached folder, even if they don't have direct access to the folder itself. Share your agent carefully.

Your folder will appear in the agent's knowledge section once the initial sync completes.
## Folder Sync Limitations
Understanding these technical constraints helps you optimize your folder structure and content strategy:
### File and Folder Limits
Maximum of **5 synced folders** per agent to maintain optimal performance and response quality.
Up to **200 files** per folder are processed. Files beyond this limit are automatically excluded.
### File Type Restrictions
**Supported formats**: Text documents (PDF, DOC, DOCX, TXT, MD), presentations (PPT, PPTX), and other text-based files.
**Excluded formats**: Spreadsheets (XLS, XLSX, CSV), images (PNG, JPG, GIF), and tabular data files.
The exclusion of spreadsheets and images is due to technical processing requirements. Spreadsheets need specialized data analysis functionality, while images require different processing methods that aren't compatible with the semantic search approach used in Folder Sync.
### Subfolder Handling
Subfolders count toward the 200-file limit, but attaching subfolders directly as separate synced folders often provides better results:
* **Reduced file skipping**: Direct subfolder attachment decreases the probability of important files being excluded
* **Better organization**: Separate synced folders for different topics or departments improve content relevance
* **Clearer context**: Focused folder scope helps the semantic search identify more relevant content sections
## Sync Behavior and Timing
### Automatic Updates
Your synced folders refresh **once daily** to ensure agents have access to the latest document versions. This automatic process:
* Detects new files added to the folder
* Updates existing files that have been modified
* Removes files that have been deleted from the source folder
* Maintains the 200-file limit by processing files in the order they appear in the source system
### Initial Sync Duration
The first synchronization can take up to **one hour** depending on folder size and file complexity. Subsequent daily updates are typically much faster.
During the initial sync, the system:
1. Downloads and processes each file for semantic search
2. Creates searchable indexes for content discovery
3. Validates file formats and applies restrictions
4. Establishes the synchronization schedule
### Manual Refresh
In addition to automatic daily syncs, you can manually refresh a synced folder at any time to get the latest updates immediately:
1. Navigate to your agent's **Knowledge** section
2. Hover over the synced folder you want to refresh
3. Click the **three dots** (⋯) that appear
4. Select **Refresh** from the menu
The manual refresh triggers an immediate sync, which is useful when:
* You've just added important files that need to be available right away
* You want to verify that recent changes have been synchronized
* You're troubleshooting sync issues and want to force an update
## Connection Ownership and User Removal
When you attach a folder from SharePoint or Google Drive to an agent, Langdock uses your personal OAuth connection to authenticate and synchronize the files. This connection is tied to your user account.
### What Happens When a User Is Removed from Langdock
If the user who originally connected a folder is removed from the Langdock workspace, the syncing will break. The files remain attached to the agent, but updates will no longer occur.
When a user is deleted from Langdock:
1. **Files remain attached**: The documents that were already synced stay attached to the agent and remain accessible to users who have access to that agent
2. **Syncing stops**: The OAuth connection that was used to refresh and sync the folder is deleted along with the user account
3. **No automatic recovery**: The system cannot automatically reconnect to the source folder
### How to Restore Syncing
To resume synchronization after the original connector is removed:
1. Open the agent's Knowledge section and remove the folder that is no longer syncing.
2. Have another user with access to the same source folder re-attach it to the agent.
3. The new user's OAuth connection will now be used for all future synchronization.
For business-critical agents, consider having multiple admins or team leads with access to the source folders. This ensures continuity if the original connector leaves the organization.
## Access Control and Permissions
### Workspace Configuration
Admins control Folder Sync availability through workspace settings:
1. Navigate to **Settings** > **Roles** in your workspace
2. Configure which user roles can attach synced folders
3. Enable or disable the feature for specific teams or departments
### Agent Sharing Considerations
When you share an agent with synced folders, all users gain access to the folder contents through the agent, regardless of their direct folder permissions in SharePoint or Google Drive.
**Best practices for secure sharing**:
* Review folder contents before attaching to agents
* Use dedicated folders for agent knowledge rather than personal or sensitive directories
* Consider creating agent-specific folders with curated content
* Regularly audit which agents have access to synced folders
## Optimizing Folder Sync Performance
### Folder Structure Recommendations
For best results with the 200-file limit:
* **Organize by topic**: Create focused folders for specific subjects or projects
* **Use descriptive filenames**: Clear names help semantic search identify relevant content
* **Regular cleanup**: Remove outdated or irrelevant files to maximize the value of your 200-file allocation
* **Consider subfolder strategy**: Attach important subfolders separately rather than relying on the parent folder
### Content Quality Tips
* **Consistent formatting**: Well-structured documents improve search accuracy
* **Clear headings**: Proper document structure helps identify relevant sections
* **Avoid duplicates**: Multiple versions of the same content can confuse the semantic search
* **Update frequency**: More frequently updated folders benefit most from daily synchronization
## Troubleshooting Common Issues
**Folder cannot be attached**
**Possible causes**:
* Integration not enabled in workspace
* Insufficient permissions to access the folder
* Folder contains no supported file types
**Solutions**:
* Verify SharePoint or Google Drive integration is active
* Check folder permissions in the source system
* Ensure folder contains at least one supported file format
**Files are missing after sync**
**Possible causes**:
* Folder exceeds 200-file limit
* Files are unsupported formats
* Sync still in progress
**Solutions**:
* Review folder contents and remove unnecessary files
* Convert spreadsheets to PDF format for inclusion
* Wait for initial sync to complete (up to one hour)
**Sync has stopped updating**
**Possible causes**:
* Source folder permissions changed
* Integration connection issues
* Workspace sync settings modified
**Solutions**:
* Verify continued access to source folder
* Check integration status in workspace settings
* Contact admin to review sync configuration
**Sync broke after a user was removed**
**Cause**:
The user who originally connected the folder to the agent has been removed from the Langdock workspace. When a user is deleted, their OAuth connection is also deleted, breaking the sync.
**Solutions**:
* Remove the folder from the agent's Knowledge section
* Have another user with access to the source folder re-add it
* The new user's connection will be used for future synchronization
See [Connection Ownership and User Removal](#connection-ownership-and-user-removal) for more details.
Start with your most frequently accessed and well-organized folders to get the best initial experience with Folder Sync.
# Introduction
Source: https://docs.langdock.com/resources/integrations/introduction-integrations
Integrations let your agents interact with other software tools, retrieve data, and take actions automatically. We've pre-built integrations for the most common tools, plus you can build custom ones.
## Integrations
Integrations can be used in agents, which are chatbots that are created for a specific situation and follow instructions based on attached knowledge.
Before integrations, you had to manually copy-paste results between applications or upload documents one by one to your agent's knowledge base. Now, integrations handle the authentication and data flow automatically, letting your agents access live data from tools like Jira, HubSpot, Google Sheets, Excel, Outlook, Google Calendar, and Gmail.
This means your agent can pull the latest project status from Jira, check your calendar for availability, or search through your emails without you lifting a finger.
You can find all available integrations [here](https://www.langdock.com/products/integrations).
## Actions
Actions are the specific tasks your agent can perform with integrated tools. Think searching for documents, downloading files to answer questions based on their content, creating new spreadsheet entries, or updating project statuses.
For each agent, you control which actions it can use, whether it needs your approval before executing them, and what context it needs through the agent's instructions.
Check out our detailed guide on using actions [here](/resources/integrations/using-integrations).
# Admin Integration Newsletter
Source: https://docs.langdock.com/resources/integrations/langdock-integration-admin-updates
Join our new notification system for critical integration updates requiring admin action
## Workspace Admin Integration Update Newsletter
We're working hard to improve integrations across our platform. As part of this ongoing effort, some updates will **require specific admin actions** to ensure everything runs smoothly. \
To keep you informed about these updates, we are introducing a dedicated notification system.
This is a new approach we are testing to improve admin communication. Your feedback helps us refine this process for everyone.
## What You Receive
By joining the Admin Update List, you'll **receive important information** regarding:
* **Admin approvals**: Integrations requiring additional permissions through admin approval. This mostly concerns Microsoft integrations, where permissions have to be granted directly in the Langdock Enterprise App in Azure.
* **Re-authentication**: Notifications when existing integration connections need to be re-authenticated by users.
* **Functionality changes**: Relevant action functionality changes that impact how integrations work in your workspace.
All updates are sent at least 2 days before deployment, giving you time to plan and execute necessary changes.
## How It Works
We manage this as a read-only Google Group, which means:
* You can join with any email address
* You cannot see other group members
* You receive notifications only
## Join the List
1. Send an email from your preferred address to: [Email to join the Admin Integrations Newsletter](mailto:admin-integrations-updates+subscribe@langdock.com)
2. You'll receive an automated confirmation that we received your request.
3. Reply to the automated email. This confirms your subscription to the Google Group, and you'll receive a final email confirming your membership.
## Need Help?
If you encounter any issues during the subscription process, reach out to our support team. We're here to help ensure you stay informed about critical integration updates.
Consider using your primary admin email address for this subscription to ensure you don't miss important updates.
***
# Model Context Protocol (MCP)
Source: https://docs.langdock.com/resources/integrations/mcp
Langdock’s implementation of the Model Context Protocol (MCP) enables seamless integration with external tools and services, providing powerful extensibility for AI agents and chat interactions.
## What is MCP?
MCP is an open protocol that standardizes how applications provide context to LLMs.
Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
MCP integrations in Langdock reuse the existing integration architecture with MCP-specific extensions, allowing the same action execution system to work for both traditional integrations and MCP servers.
## Langdock MCP Key Features
### Transport Types
* **STREAMABLE\_HTTP**: HTTP-based transport with optional streaming support
* **SSE (Server-Sent Events)**: Real-time communication transport for streaming data
### Authentication Methods
* **No Authentication:** For public MCP servers that don't require identification
* **API Key Authentication:** Simplified key-based authentication with automatic header formatting
* **OAuth Authentication:** Full OAuth 2.0 with Dynamic Client Registration (DCR) support for secure authorization flows
* **Advanced OAuth Authentication:** Full OAuth 2.0 without DCR support
### Tool Integration
**Automatic Discovery:** MCP tools are automatically converted to Langdock actions, maintaining the same confirmation mechanisms and previews you're familiar with.
## Getting Started
Different authentication methods have slightly different connection flows.
1. **Enter URL**
Enter the endpoint URL of your MCP server.
2. **Select Authentication Method**
Choose one of the four authentication types. Follow the respective guide below:
### **No Authentication & API Key Authentication**
For public or API key-protected servers, simply enter the server URL (and API key if required).
### OAuth Authentication
For servers supporting OAuth with PKCE, the connection process is straightforward:
1. Enter the server URL and select OAuth.
2. Click **"+ Add connection"** to initiate the OAuth flow.
3. Once connected, a success popup will confirm the connection.
4. Click **"Test connection"** to verify access and see available tools.
5. Select the tools you want to use, then click **"Save tools"**.
### Advanced OAuth Authentication
For servers using OAuth without DCR, the process involves a few extra steps:
1. Enter the server URL and select Advanced OAuth 2.0.
2. Copy the OAuth redirect URL into your app’s API or developer settings.
3. Copy the Client ID and Client Secret from your app into Langdock.
4. Add the Authorization URL (for permissions) and Token URL (for token exchange).
5. Define the required OAuth scopes (space- or comma-separated, per your server's documentation).
6. Test the connection and save your tools.
***
### Zapier MCP Configuration
For Zapier MCP servers, the URL has this format:
`https://mcp.zapier.com/api/mcp/s/[your-api-key]/mcp`
Split your URL as follows:
* **Server URL:** [https://mcp.zapier.com/api/mcp/mcp](https://mcp.zapier.com/api/mcp/mcp)
* **API Key:** The encoded string after /s/ in your original URL
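The split above can be sketched in code. The helper below is a hypothetical illustration (not part of Langdock or Zapier) that pulls the server URL and API key out of a Zapier MCP URL of the documented shape:

```javascript
// Hypothetical helper: split a Zapier MCP URL into the two fields
// Langdock expects (server URL and API key).
function splitZapierMcpUrl(fullUrl) {
  const match = fullUrl.match(
    /^(https:\/\/mcp\.zapier\.com\/api\/mcp)\/s\/([^/]+)\/mcp$/
  );
  if (!match) throw new Error("Not a recognized Zapier MCP URL");
  return { serverUrl: `${match[1]}/mcp`, apiKey: match[2] };
}

const parts = splitZapierMcpUrl(
  "https://mcp.zapier.com/api/mcp/s/EXAMPLE_KEY/mcp"
);
// parts.serverUrl → "https://mcp.zapier.com/api/mcp/mcp"
// parts.apiKey    → "EXAMPLE_KEY"
```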
***
## Related Documentation
* [Integrations Overview](/resources/integrations/introduction-integrations) - Learn about Langdock's integration system
* [Agent Actions](/resources/integrations/using-integrations) - Understanding how actions work with agents
## Additional Resources
* [General introduction to MCP](https://modelcontextprotocol.io/docs/getting-started/intro)
* [Deep dive into the protocol specification](https://modelcontextprotocol.io/specification/2025-06-18)
# Sandbox Utility Functions
Source: https://docs.langdock.com/resources/integrations/sandbox-utilities
Complete reference for built-in utility functions available in custom integrations, actions, triggers, and workflow code nodes
## Overview
When writing custom code for integrations, actions, triggers, or workflow code nodes, you have access to a set of built-in utility functions. These functions run in a secure sandboxed JavaScript environment that provides essential capabilities without requiring external libraries.
### What is the Sandbox?
The sandbox is a secure, isolated JavaScript execution environment that:
* **Runs untrusted code safely** - Memory-limited and timeout-enforced execution
* **Provides essential utilities** - HTTP requests, data conversions, cryptography
* **Prevents security risks** - No file system access, no dangerous globals like `eval` or `process`
* **Requires no dependencies** - No npm packages or external imports needed
Custom integration code runs in a secure sandboxed environment. **You cannot
install or import external libraries (npm, pip, etc.)** - only the built-in
JavaScript/Node.js APIs documented here are available. For advanced processing
(e.g., PDF parsing, image manipulation), use external APIs or services and
call them from your integration code.
### Where These Utilities Are Available
The sandbox utilities are available in:
* **Custom integration actions** - Code that interacts with external APIs
* **Custom integration triggers** - Code that monitors for events
* **Authentication flows** - OAuth and API key validation code
* **Workflow code nodes** - Custom JavaScript in workflow automations
## HTTP & Networking
### ld.request()
Make HTTP requests to external APIs with automatic JSON handling and error management.
**Parameters:**
```typescript theme={null}
{
method: string; // HTTP method: 'GET', 'POST', 'PUT', 'PATCH', 'DELETE'
url: string; // Full URL to request
headers?: object; // Request headers
params?: object; // URL query parameters
body?: object | string; // Request body (auto-stringified if object)
timeout?: number; // Request timeout in milliseconds
responseType?: string; // 'stream' or 'binary' for file downloads
}
```
**Returns:**
```typescript theme={null}
{
status: number; // HTTP status code
headers: object; // Response headers
json: any; // Response body parsed as JSON
text: string; // Response body as text
buffer: Buffer; // Response body as buffer (for binary data)
}
```
**Example: GET Request**
```javascript theme={null}
const options = {
method: "GET",
url: "https://api.example.com/users/123",
headers: {
Authorization: `Bearer ${data.auth.access_token}`,
Accept: "application/json",
},
};
const response = await ld.request(options);
return response.json;
```
**Example: POST Request with Body**
```javascript theme={null}
const options = {
method: "POST",
url: "https://api.example.com/tickets",
headers: {
Authorization: `Bearer ${data.auth.api_key}`,
"Content-Type": "application/json",
},
body: {
title: data.input.title,
description: data.input.description,
priority: "high",
},
};
const response = await ld.request(options);
return {
ticketId: response.json.id,
url: response.json.url,
};
```
**Example: File Download**
```javascript theme={null}
const options = {
method: "GET",
url: `https://api.example.com/files/${data.input.fileId}/download`,
headers: {
Authorization: `Bearer ${data.auth.access_token}`,
},
responseType: "binary", // or 'stream'
};
const response = await ld.request(options);
return {
files: {
fileName: "document.pdf",
mimeType: "application/pdf",
base64: response.buffer.toString("base64"),
},
};
```
**Example: Form Data Upload**
```javascript theme={null}
const formData = new FormData();
formData.append("file", data.input.file.base64, data.input.file.fileName);
formData.append("description", "Uploaded via Langdock");
const options = {
method: "POST",
url: "https://api.example.com/upload",
headers: {
Authorization: `Bearer ${data.auth.access_token}`,
},
body: formData,
};
const response = await ld.request(options);
return response.json;
```
The `body` parameter is automatically stringified if you pass an object. For
`application/x-www-form-urlencoded` content type, the body is automatically
converted to the appropriate format.
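The urlencoded conversion described above is equivalent to serializing the object with `URLSearchParams`. This is a runnable illustration of the resulting wire format, not the sandbox's internal implementation:

```javascript
// Illustration only (not the sandbox internals): how an object body maps
// to application/x-www-form-urlencoded.
const form = { grant_type: "refresh_token", refresh_token: "abc123" };
const encoded = new URLSearchParams(form).toString();
// encoded → "grant_type=refresh_token&refresh_token=abc123"
```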
### ld.awsRequest()
Make AWS SigV4-signed requests to AWS services like S3, API Gateway, or custom AWS APIs.
**Parameters:**
```typescript theme={null}
{
method: string; // HTTP method
url: string; // AWS service URL
headers?: object; // Additional headers
body?: object | string; // Request body
region: string; // AWS region (e.g., 'us-east-1')
service: string; // AWS service name (e.g., 's3', 'execute-api')
credentials: { // AWS credentials
accessKeyId: string; // AWS access key ID
secretAccessKey: string; // AWS secret access key
sessionToken?: string; // AWS session token (for temporary credentials)
}
}
```
**Example: S3 File Upload**
```javascript theme={null}
const options = {
method: "PUT",
url: `https://my-bucket.s3.us-east-1.amazonaws.com/${data.input.fileName}`,
headers: {
"Content-Type": data.input.mimeType,
},
body: Buffer.from(data.input.file.base64, "base64"),
region: "us-east-1",
service: "s3",
credentials: {
accessKeyId: data.auth.aws_access_key_id,
secretAccessKey: data.auth.aws_secret_access_key,
},
};
const response = await ld.awsRequest(options);
return {
success: true,
url: options.url,
};
```
## Data Format Conversions
### ld.csv2parquet()
Convert CSV text to Parquet format with optional compression and array support.
**Parameters:**
```typescript theme={null}
csvText: string;        // CSV data as text
options?: {
  compression?: string; // 'gzip', 'snappy', 'brotli', 'lz4', 'zstd' (default), or 'uncompressed'
}
```
**Returns:** `{ base64: string, success: boolean }`
**Example:**
```javascript theme={null}
const csvText = `name,age,skills
Alice,30,"[Python,JavaScript]"
Bob,25,"[Java,Go]"`;
const result = await ld.csv2parquet(csvText, {
compression: "gzip",
});
return {
files: {
fileName: "data.parquet",
mimeType: "application/vnd.apache.parquet",
base64: result.base64,
},
};
```
CSV columns containing array-like strings (e.g., `"[Python,JavaScript]"`) are
automatically detected and converted to Parquet List columns.
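To make the detection concrete, here is a hypothetical sketch (not the sandbox's actual logic) of how an array-like cell value can be recognized and split before it becomes a Parquet List column:

```javascript
// Illustration only (not the sandbox internals): detecting array-like
// CSV cells such as "[Python,JavaScript]" and splitting them into lists.
function parseCell(cell) {
  const match = cell.match(/^\[(.*)\]$/);
  return match ? match[1].split(",") : cell;
}

parseCell("[Python,JavaScript]"); // → ["Python", "JavaScript"]
parseCell("30");                  // → "30" (plain cells pass through)
```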
### ld.parquet2csv()
Convert Parquet format to CSV text, handling List columns appropriately.
**Parameters:**
```typescript theme={null}
base64Parquet: string; // Base64-encoded Parquet file
```
**Returns:** `{ base64: string, success: boolean }`
**Example:**
```javascript theme={null}
const parquetBase64 = data.input.parquetFile.base64;
const result = await ld.parquet2csv(parquetBase64);
return {
files: {
fileName: "data.csv",
mimeType: "text/csv",
base64: result.base64,
},
};
```
### ld.arrow2parquet()
Convert Arrow IPC Stream format to Parquet.
**Parameters:**
```typescript theme={null}
buffer: Buffer;         // Arrow IPC Stream buffer
options?: {
  compression?: string; // Compression type (same as csv2parquet)
}
```
**Returns:** Base64-encoded Parquet file
**Example:**
```javascript theme={null}
const arrowBuffer = Buffer.from(data.input.arrowFile.base64, "base64");
const parquetBase64 = await ld.arrow2parquet(arrowBuffer, {
compression: "snappy",
});
return {
files: {
fileName: "data.parquet",
mimeType: "application/vnd.apache.parquet",
base64: parquetBase64,
},
};
```
### ld.json2csv()
Convert JSON data to CSV format using the nodejs-polars library.
**Parameters:**
```typescript theme={null}
jsonData: Array