Every thunk can integrate with other business applications and data in both the planning phase and the execution phase.
In the planning phase, SOP (Standard Operating Procedure) files can be provided to guide the AI-driven planning process that creates the initial thunk workflow.
In the execution phase, there are five potential points of integration:
Workflow inputs: these are used to create a new workflow request in a thunk. Other applications can submit workflow inputs via email, webhooks, or REST APIs (a REST call is sketched after this list).
Compliance policies: these are files used to provide detailed instructions that guide valid AI execution. These files are added to Content Folders during the design phase via file system connections.
Searchable content: these are files used to provide context to AI agents as they do their work. These files are also added to Content Folders during the design phase via file system connections.
AI Tools: these are capabilities added to AI agents to give them access to other applications and data during the course of their work. Thunk.AI supports both custom tools and tool libraries: some tool libraries are available out of the box, while others are enabled by creating an application connection. In an enterprise environment, MCP connections provide a standardized and efficient way to access a broad range of enterprise applications and data (a minimal tool-definition sketch also follows this list).
Workflow results: a typical workflow records its results during execution, at the end of execution, or both, by writing information to files or to other systems. In Thunk.AI, this is accomplished by providing the appropriate AI tools to various steps of the workflow. These tools are created, configured, and used in exactly the same manner as the tools that provide information to the AI agents.
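As an illustration of the workflow-input integration point, here is a minimal sketch of an external application submitting a new workflow request over REST. The base URL, route, payload fields, and bearer-token header are hypothetical placeholders, not the documented Thunk.AI API; the actual endpoint contract comes from the platform's API reference.

```python
# Minimal sketch: an external application creating a workflow input via REST.
# The endpoint, payload fields, and auth header are illustrative placeholders,
# not the documented Thunk.AI API.
import requests

THUNK_API_BASE = "https://example-thunk-host/api/v1"   # hypothetical base URL
API_KEY = "..."                                          # issued by the platform admin

def create_workflow_input(thunk_id: str, payload: dict) -> dict:
    """Submit a new workflow request to the given thunk."""
    response = requests.post(
        f"{THUNK_API_BASE}/thunks/{thunk_id}/workflow-inputs",  # hypothetical route
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = create_workflow_input(
        thunk_id="invoice-approval",
        payload={"invoice_id": "INV-1042", "amount": 1800.00, "currency": "USD"},
    )
    print("Created workflow request:", result)
```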
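The AI-tool and workflow-result integration points can both be pictured as tool definitions made available to an agent. The sketch below models a custom tool as a JSON-schema-style declaration paired with a handler, registered in a plain Python dictionary; the registration mechanism, field names, and handlers are assumptions for illustration only, not the Thunk.AI tool SDK or the MCP wire format.

```python
# Minimal sketch of a custom AI tool: a declared schema plus a handler the
# agent host can invoke. The declaration format and registry are illustrative
# assumptions, not the Thunk.AI tool SDK or the MCP protocol itself.
from typing import Any, Callable, Dict

# A tool pairs a machine-readable declaration (what the LLM sees) with a
# handler (what the agent host actually executes).
ToolHandler = Callable[[Dict[str, Any]], Dict[str, Any]]

TOOL_REGISTRY: Dict[str, Dict[str, Any]] = {}

def register_tool(name: str, description: str, parameters: Dict[str, Any],
                  handler: ToolHandler) -> None:
    """Record a tool so the agent host can expose it to its AI agents."""
    TOOL_REGISTRY[name] = {
        "description": description,
        "parameters": parameters,     # JSON-schema-style parameter spec
        "handler": handler,
    }

# A "read" tool that provides information to the agent...
def lookup_customer(args: Dict[str, Any]) -> Dict[str, Any]:
    # In a real deployment this would query a CRM via an application connection.
    return {"customer_id": args["customer_id"], "status": "active"}

# ...and a "write" tool that records workflow results in another system.
def write_result_record(args: Dict[str, Any]) -> Dict[str, Any]:
    # In a real deployment this might append a row to a database or file share.
    return {"stored": True, "record": args}

register_tool(
    "lookup_customer",
    "Fetch basic customer details from the CRM.",
    {"type": "object", "properties": {"customer_id": {"type": "string"}},
     "required": ["customer_id"]},
    lookup_customer,
)
register_tool(
    "write_result_record",
    "Persist a workflow result to the results store.",
    {"type": "object", "properties": {"summary": {"type": "string"}},
     "required": ["summary"]},
    write_result_record,
)
```

Note how the same pattern serves both directions: lookup_customer feeds information to the agent, while write_result_record persists workflow results, mirroring the point above that result-writing tools are created and configured exactly like information-providing ones.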
As background context, it is important to consider three elements of the Thunk.AI platform architecture:
The deployment architecture: The Thunk.AI platform may be deployed as a SaaS application in a public cloud or as a private instance in a customer's own cloud tenant. Read more about the supported deployment options. Each deployment option has different mechanics when it comes to enabling integrations.
The AI Agent execution model: One of the services in the platform is a scalable agent host service that provides the environment in which every AI agent executes. The AI agents use large language models (LLMs) to interpret instructions and make decisions. The LLMs never directly integrate with other systems; instead, they ask the AI agents running in the agent host service to read files or invoke tool calls on their behalf.
The use of tools: All interaction between an AI agent and the external environment happens through requested tool calls. AI agents always run within a control sandbox environment that limits and checks what they do. Read more about the AI agent execution model. The control sandbox verifies each request, executes the tool call, validates the result, and then provides it to the AI agent (a simplified version of this loop is sketched below). Read more about access control to tools.
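To make the execution model and the sandbox concrete, the following sketch walks through one turn of the verify/execute/validate loop described above. The data shapes, the access-policy check, and the function names are illustrative assumptions, not the platform's internal interfaces.

```python
# Illustrative sketch of one turn of the control-sandbox loop: the LLM proposes
# a tool call, the sandbox verifies it, executes it, validates the result, and
# only then returns the result to the agent. Names and structures are
# hypothetical, not the platform's internal interfaces.
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class ToolRequest:
    tool_name: str
    arguments: Dict[str, Any]

ALLOWED_TOOLS = {"lookup_customer", "write_result_record"}  # per-agent access policy

def run_sandboxed_turn(request: ToolRequest,
                       registry: Dict[str, Dict[str, Any]]) -> Dict[str, Any]:
    # 1. Verify the request against the agent's access policy.
    if request.tool_name not in ALLOWED_TOOLS or request.tool_name not in registry:
        return {"error": f"tool '{request.tool_name}' is not permitted for this agent"}

    # 2. Execute the tool call on the agent's behalf (the LLM never calls it directly).
    result = registry[request.tool_name]["handler"](request.arguments)

    # 3. Validate the result before handing it back to the AI agent.
    if not isinstance(result, dict):
        return {"error": "tool returned an unexpected result type"}

    # 4. Provide the validated result to the agent for its next reasoning step.
    return {"result": result}

# Example usage with the registry from the previous sketch:
# request = ToolRequest("lookup_customer", {"customer_id": "C-001"})
# print(run_sandboxed_turn(request, TOOL_REGISTRY))
```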