Every thunk can integrate with and use data from other business applications, both in the planning phase and in the execution phase. The Thunk.AI platform makes it straightforward to connect AI automations to the business applications your organization already uses.
In this article, we discuss the points of integration between business applications and a thunk, and how those integrations are enabled.
A thunk integrates with business applications via files during the Planning phase, and through inbound and outbound integrations during the Execution phase.
Planning Phase
In the planning/design phase, SOP (Standard Operating Procedure) files are often the source of instruction or policy for a business process. This requires integration with the File System (SharePoint, Google Drive, Box, etc.).
Execution Phase
In the execution phase, there are two broad categories of interaction between a thunk and external applications.
Inbound integrations: the initiator of activity is an external business application that makes an inbound request to a thunk.
The primary form of inbound request is to trigger a workflow automation instance. Other applications can create inbound workflow requests via email, webhooks, or REST APIs (a sketch of a webhook-style trigger follows this list).
Another form of inbound request is to invoke an exported tool call via a REST API or via the MCP protocol. This is commonly used to invoke AI logic in a thunk from other AI systems (e.g., Claude or ChatGPT).
Outbound integrations: the initiator of activity is an AI agent running in a thunk, making an outbound request to a business application. This is extremely common, and it is essential for AI agents that need to do meaningful work. Outbound requests fall into three categories:
To acquire context: files are commonly used to provide context to AI agents as they do their work. These files are added to Content Folders during the design phase via File System connections.
To fetch data: the appropriate business applications are integrated via connections (most commonly using the MCP protocol, but REST APIs and SQL protocols are also supported). Each connection is associated with a set of AI "tools" (AI-agent-friendly APIs) that give the AI agents the ability to access the relevant data from the business application.
To take actions (e.g., update an entry in a business application): just as with fetching data, business systems are integrated via connections, and action tools associated with each connection enable the AI agents to take appropriate actions (a sketch of outbound read and action tool calls also follows this list).
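To make the inbound case concrete, the sketch below shows how an external application might trigger a new workflow instance with a simple HTTP POST. The URL, authentication scheme, and payload fields are hypothetical placeholders, not the actual Thunk.AI API; the exact endpoint and payload depend on how the thunk is configured.

```python
# Hypothetical sketch: an external application triggers a new workflow
# instance in a thunk via an inbound webhook / REST call.
# The URL, token handling, and payload fields are illustrative only.
import requests

THUNK_WEBHOOK_URL = "https://example-tenant.thunk.ai/api/workflows/invoice-intake/trigger"  # hypothetical
API_TOKEN = "..."  # credential issued for this integration

payload = {
    # Data the new workflow instance will start with.
    "source": "erp-system",
    "invoice_id": "INV-10421",
    "amount": 1250.00,
}

response = requests.post(
    THUNK_WEBHOOK_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("Workflow instance created:", response.json())
```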
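On the outbound side, each connection exposes a set of tools that the AI agents can call: read tools to fetch data and action tools to make updates. The sketch below is purely illustrative; the Connection class, tool names, and arguments are invented stand-ins for whatever the connected business application actually exposes through MCP, REST, or SQL.

```python
# Hypothetical sketch of outbound tool calls through a connection.
# `Connection`, the tool names, and the arguments are illustrative only;
# real connections expose whatever tools the integrated application provides.
from dataclasses import dataclass
from typing import Any, Callable, Dict


@dataclass
class Connection:
    """A connection groups the tools exposed by one business application."""
    name: str
    tools: Dict[str, Callable[..., Any]]

    def call_tool(self, tool_name: str, **arguments: Any) -> Any:
        return self.tools[tool_name](**arguments)


# Stand-in implementations for a CRM-like application.
def get_customer(customer_id: str) -> dict:
    return {"id": customer_id, "status": "active", "owner": "alice"}


def update_customer(customer_id: str, **fields: Any) -> dict:
    return {"id": customer_id, "updated": fields}


crm = Connection(
    name="crm",
    tools={
        "get_customer": get_customer,        # read tool: fetch data
        "update_customer": update_customer,  # action tool: take an action
    },
)

# An AI agent first fetches data, then takes an action based on it.
record = crm.call_tool("get_customer", customer_id="C-42")
if record["status"] == "active":
    crm.call_tool("update_customer", customer_id="C-42", tier="priority")
```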
As background context, it is important to consider three elements of the Thunk.AI platform architecture:
The deployment architecture: The Thunk.AI platform may be deployed as a SaaS application in a public cloud, or it may be deployed as a private instance in a customer's own cloud tenant. Read more about the supported deployment options. Each of these deployment options has different mechanics when it comes to enabling integrations.
The MCP protocol: the Model Context Protocol (MCP) is an industry-standard way for agentic AI systems to integrate with, fetch information from, and take actions in external business applications. MCP connections in Thunk.AI provide a standardized, efficient, and easy way to enable AI access to a broad range of enterprise applications and data.
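For readers unfamiliar with MCP, the sketch below shows the core of the protocol from a client's point of view, using the official MCP Python SDK. This is not Thunk.AI code (the platform manages connections internally), and the server command and tool name are placeholders; the point is the standard flow of connecting, listing the tools a server exposes, and calling one of them.

```python
# Minimal MCP client sketch using the official Python SDK (`pip install mcp`).
# Not Thunk.AI's internal code; the server command and tool name are
# placeholders. It illustrates the standard MCP flow:
# connect -> list the tools the server exposes -> call one of them.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch (or connect to) an MCP server for some business application.
    server = StdioServerParameters(command="example-mcp-server", args=[])

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the AI-agent-friendly tools this connection provides.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

            # Invoke one tool; the name and arguments depend on the server.
            result = await session.call_tool(
                "search_records", arguments={"query": "open invoices"}
            )
            print(result)


if __name__ == "__main__":
    asyncio.run(main())
```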
The AI Agent execution model: the AI agents in a thunk use large language models (LLMs) to interpret instructions and make decisions. However, the LLMs never directly integrate with other systems. The AI agents and their LLMs always run within an "AI Guardian" environment that limits and checks what they do. Read more about the AI agent execution model.
Instead, the agent asks the AI Guardian to invoke a tool. Tool calls correspond to requests to read information from, or make updates in, external business applications (via MCP connections). The AI Guardian verifies each request, executes the tool call, validates the result, and then provides it to the AI agent.
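The sketch below illustrates that mediation pattern in simplified form. The class and method names are invented for this example and are not Thunk.AI APIs; the point is that the agent only ever proposes a tool call, while the guardian checks it against an allow-list, executes it, and validates the result.

```python
# Conceptual sketch of the AI Guardian mediation pattern.
# Class and function names are invented for illustration only.
from typing import Any, Callable, Dict


class AIGuardian:
    """Sits between the AI agent (LLM) and external business applications."""

    def __init__(self, allowed_tools: Dict[str, Callable[..., Any]]):
        self.allowed_tools = allowed_tools

    def invoke(self, tool_name: str, arguments: Dict[str, Any]) -> Any:
        # 1. Verify the request: only pre-approved tools may run.
        if tool_name not in self.allowed_tools:
            raise PermissionError(f"Tool '{tool_name}' is not permitted")

        # 2. Execute the tool call against the external application.
        result = self.allowed_tools[tool_name](**arguments)

        # 3. Validate the result before returning it to the agent.
        if result is None:
            raise ValueError(f"Tool '{tool_name}' returned no data")
        return result


# The agent (LLM) never calls tools directly; it only asks the guardian.
guardian = AIGuardian(
    allowed_tools={
        "lookup_order": lambda order_id: {"id": order_id, "status": "shipped"},
    }
)
order = guardian.invoke("lookup_order", {"order_id": "SO-1001"})
print(order)
```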
