
An overview of the Thunk.AI platform

A quick tour of all the Thunk.AI concepts


Thunk.AI is a self-service platform to create and run AI agent workflow applications.

Every AI agent workflow application is called a "thunk". Users can design, create, and execute many thunks using the Thunk.AI platform. They can also invite others to help with the design of their thunks, or to participate in the actual work that is being automated. Each thunk can be configured to be completely automated or to have intentional human-in-the-loop engagement.

Let's look at the key aspects of the Thunk.AI platform:

  1. A user model where every user is paired with an AI agent.

  2. An application model for AI-native applications called "thunks".

  3. An intelligent design environment for thunks.

  4. A reliable and secure execution environment for thunks and AI agents within thunks.

  5. A scalable enterprise-ready service that hosts and runs many thunks.


1. User model

Users sign up for an account with Thunk.AI. Each user has an AI agent to assist them and automate their work. The user's AI agent utilizes a generative-AI model (aka a "large language model" or LLM) to understand and execute work on their behalf.

1.1. User roles

There are three user roles within a thunk:

  1. Thunk Owner/Designer: this is a user who designs the logic of a thunk. In a traditional application, this would have required programming skills. But since Thunk.AI is a no-code intelligent platform, the application logic is described in natural language.

  2. Human Agent: this is a user who may be assigned a work step as part of the thunk workflow. The human agent along with their AI agent are responsible for performing that step.

  3. End User: this is a user who can only engage with the thunk by making a workflow request.

Typically, the designers and the human agents are members of the team or organization providing the workflow application or service, while the end user is a consumer of the application.

1.2. Access Control

The platform enforces access control based on the users listed in every thunk and the roles assigned to them.

1.3. User Account Configuration

At the level of the user account, there are some important configuration options that control the behavior of the Thunk.AI platform.

  • Every user has a profile which allows them to describe their job, their role on their team, and the desired stylistic behavior of AI working on their behalf. This acts as important input that helps the AI agents behave intelligently and meet the user's expectations.

  • Of course, generative-AI models (LLMs) lie at the heart of the Thunk.AI platform. The choice of LLM is therefore potentially important for each user. The default model is chosen at the level of the user's account. While the default model is GPT-4.1-mini or GPT-4.1 from OpenAI, the platform also supports the Claude models from Anthropic, Gemini from Google, and other LLMs. Users need to provide their own API keys for these models. In general, when users provide their own API keys, they gain more direct control over AI usage.

  • Connections to other applications, databases, file systems, and sign-in websites are also maintained at the account level. For example, it is very common for a user to set up a connection to a Google Drive account or a Microsoft Sharepoint account. Once set up at the account level, these connections to business applications become available (in a controllable manner) to thunks owned by the user, and to the AI agents that run inside those thunks.



2. Application model

Each thunk describes a workflow process. The workflow logic is defined at design-time and describes what to do at each step of the process. The thunk owner defines the workflow logic using natural language, articulating high-level goals and outcomes. For example, the thunk owner could say: "for each project proposal, collect past history and reviews of the vendor, check that the proposal covers all aspects of the RFP, and provide a proposal evaluation score according to a specific rubric".

There are three parts to the application model: (a) the Workflow Plan, (b) the Content Folders, and (c) the AI Tool Libraries.

When you open any thunk, the first three tabs you see in the left pane correspond to these three parts of the application model.

2.1. Workflow Plan

Guided by the owner's natural-language instructions, their AI agent forms a workflow "Plan" that has a sequence of steps to execute the process. Later, at runtime, each request will run as a separate workflow instance following the sequence of steps in the plan.

Typically, the thunk owner, with the help of the planning AI agent, also provides descriptions of the logic to run for each step in the plan. We call these logic descriptions "AI Instructions". The AI Instructions are the primary "programming model" for AI in the platform. In the runtime environment, AI agents are launched to execute based on these AI Instructions.

Each workflow instance may be long-running -- it may involve multiple steps, it might have to wait for humans to engage, and it may have to hand off work from one person to another person (or their AI agent). As a result, each workflow instance maintains structured state to capture the progress of the work. If you are familiar with database applications, you might think of each workflow request as being associated with a "database row" that gradually gets filled in as the workflow process is executed.
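The plan-plus-state model described above can be pictured with a small sketch. This is not the platform's real schema -- every field and class name here is an illustrative assumption, using the proposal-evaluation example from the text.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PlanStep:
    """One step of a thunk's workflow plan (illustrative fields only)."""
    name: str
    ai_instructions: str            # natural-language logic for this step
    assignee: Optional[str] = None  # optional human agent responsible

@dataclass
class WorkflowInstance:
    """One workflow request: a 'database row' filled in step by step."""
    request_id: str
    current_step: int = 0
    state: dict = field(default_factory=dict)  # structured state gathered so far

# A tiny plan for the proposal-evaluation example
plan = [
    PlanStep("collect_history", "Collect past history and reviews of the vendor"),
    PlanStep("check_coverage", "Check that the proposal covers all aspects of the RFP"),
    PlanStep("score", "Provide a proposal evaluation score according to the rubric"),
]

# The instance's state accumulates as each step completes
instance = WorkflowInstance(request_id="req-001")
instance.state["vendor_history"] = "3 prior projects, avg rating 4.2"
instance.current_step = 1
```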

2.2. Content Folders

Every thunk can define and populate content folders to provide information relevant to the intended work. For example, a customer support thunk may define a folder of product usage documents that can help answer questions.

These content folders can include documents in a variety of formats, images, video, or web pages. They can hold a small curated set of content, or they can hold thousands of documents. The thunk designer can also provide AI instructions to analyze and pre-extract structured properties from content folders. For example, the title and authors of a document might be pre-extracted as structured properties.
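To make the idea of pre-extracted structured properties concrete, here is a minimal sketch of what a folder definition might look like. The real platform is configured in natural language, not code; the dictionary shape and helper below are invented for illustration.

```python
# Hypothetical content folder definition with pre-extraction instructions.
folder = {
    "name": "product_usage_docs",
    "formats": ["pdf", "docx", "html"],
    "extract_properties": {
        "title": "Extract the document title",
        "authors": "Extract the list of authors",
    },
}

def extracted_schema(folder_def):
    """Return the structured property names an AI agent would pre-extract."""
    return sorted(folder_def["extract_properties"])
```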

The Thunk.AI platform always pre-defines a standard content folder for organizational policy documents, as these are relevant across all thunks.

2.3. AI Tool Libraries

An AI tool is a skill or capability provided to an AI agent to extend what it can do, to bring in information it doesn't have, or for it to record its work. Often, AI tools are used to connect an AI agent to the rest of the business work environment and the other applications and data in that environment. Instead of responding to an instruction with just a message, an AI agent can respond by invoking an AI tool with appropriate parameters.

Some tools are used to fetch information (eg: search a company database) while other tools perform actions (eg: create new documents or update external systems).

Collections of related AI tools are organized into AI tool libraries. The thunk designer can enable or disable AI tools and tool libraries as needed to steer the desired behavior of AI agents.
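In practice, "responding by invoking an AI tool with appropriate parameters" usually means the agent emits a structured call rather than free text, which the environment then parses and executes. A minimal sketch of that pattern, with invented tool names and a JSON call format that is only an assumption here:

```python
import json

# A registry of available tools; a designer could enable or disable entries.
TOOLS = {
    "web_search": lambda query: f"results for {query!r}",
    "create_document": lambda title, body: f"created {title!r}",
}

def dispatch(agent_response: str):
    """Parse a structured tool call emitted by the agent and execute it."""
    call = json.loads(agent_response)
    tool = TOOLS[call["tool"]]   # raises KeyError if the tool is unknown/disabled
    return tool(**call["arguments"])

result = dispatch('{"tool": "web_search", "arguments": {"query": "vendor reviews"}}')
```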

Here is a list of platform-defined AI tool libraries available to every thunk:

  • Web search module: tools to search the web, answer questions based on web content, read and extract information from web pages

  • File system module: tools to read, create, update, move, and delete files in a cloud file system (Google Drive or Office 365, depending on the user's work environment).

  • Image processing module: tools to edit and query image content

  • Video processing module: tools to edit and query video content

  • Communications module: tools to draft or send email and messages

  • Document templating module: tools to create documents based on parameterized templates

In addition, each thunk designer (with the help of their AI agent) can create their own custom AI tools (eg: to connect to specific enterprise systems or databases). Some of the tools and content collections in a thunk can also be exported as a reusable library, shared with others, and imported for use into another thunk.


3. Intelligent design environment

During the design phase, the thunk owner/designer is assisted by their AI agent to create a thunk.

  • The owner uses concise natural language instructions to describe the desired workflow. This launches AI agent tasks to generate an appropriate workflow plan and appropriate state management data structures.

  • The thunk designer may describe content folders to build. This launches AI agent tasks to automatically construct and initialize the content folders, and create the appropriate retrieval indexes.

  • The thunk designer may describe custom AI tools to build. These requests also launch AI agent tasks to automatically build the tools.

  • The thunk designer may establish connections to external business applications, import AI tool libraries, and modify the various settings that control AI agent behavior.

  • The thunk designer may add and manage human agents and end users who have access to the thunk.

  • At any time during the design phase, the thunk owner can interact with their AI agent, providing instructions or seeking guidance.

Usually, the design phase of a thunk involves iterative authoring, testing, and evaluation -- all of which are natively supported in the Thunk.AI platform.


4. Reliable, secure, and scalable runtime environment

The runtime environment for a thunk has four components:

  1. Interfaces to receive workflow requests and initiate workflow execution for each request

  2. A scalable workflow orchestration layer that follows the steps of the workflow plan

  3. A reliable and secure AI agentic layer that evaluates the AI Instructions at each step. This lies at the heart of the platform and is the most important layer.

  4. A document indexing and search layer that processes content folders and makes them available to the AI agents

4.1. Workflow request interfaces

Workflow requests may be received by a thunk in different ways:

  • Each thunk has an "Email Inbox" so that email messages can be sent to it, along with attachments. Each of these messages can be processed by custom AI Instructions to extract information and create one or more workflow requests.

  • Each thunk also has a secure Webhook endpoint, which can receive form submissions or events from external applications.

  • For some use cases, it may be appropriate to submit workflow requests via the REST API that is created for every thunk.

  • Every thunk also has a chat-based AI agent that can act upon natural language instructions to create new workflow requests (perhaps by loading them from a spreadsheet or by searching the web, etc).

  • Finally, every thunk also provides interactive user interfaces to load individual workflow requests via a form, or to bulk-load them from files or folders.
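A webhook or REST submission from an external application is just an HTTP POST with a JSON body. The sketch below shows the general shape; the endpoint URL and payload fields are hypothetical placeholders, not the platform's actual API, so consult your thunk's own API documentation for the real values.

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape -- placeholders for illustration only.
THUNK_WEBHOOK = "https://example.invalid/thunks/proposal-review/webhook"

payload = {
    "source": "intake-form",
    "fields": {"vendor": "Acme Corp", "proposal_url": "https://example.com/p.pdf"},
}

req = urllib.request.Request(
    THUNK_WEBHOOK,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send the request; it is omitted
# here so the sketch stays runnable offline.
```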

4.2. Workflow orchestration

The Thunk.AI runtime runs each request as a separate workflow instance. It maintains the state of the workflow instance, launching the appropriate steps at the right times to evaluate each step of the workflow. Within this execution, there are many tasks that an AI agent (the thunk owner's AI agent or a work participant's AI agent) may need to perform and each of those tasks operates based on AI Instructions specific to the task.

  • When new workflow requests are collected, an AI agent task is launched (on behalf of the thunk owner) to parse the requests and create structured workflow requests.

  • An AI agent task is launched (on behalf of the thunk owner) to assign each step to an appropriate human agent.

  • An AI agent task is launched (on behalf of the assigned human agent) to execute the instructions for that workflow step. This may spawn several other tasks as part of getting the work done.

  • At any stage while working on a step, the human agent could choose to interact with their AI agent using a chat interface. Any instructions from the human agent during this interaction may lead to new AI agent tasks.

  • There may be other AI agent tasks launched and executed to verify and validate the progress of the workflow.

The orchestration layer is responsible for launching the right tasks at the right times, scaling to large data sets, and maintaining workflow state. The AI agent execution layer (described next) is responsible for the execution of each task.

4.3. AI Agentic execution

AI agents utilize generative-AI models (LLMs), and all LLMs have some common core capabilities:

  • The ability to have contextual conversational interaction and follow user-provided instructions.

  • "Knowledge" of a broad collection of information from public sources (though usually a few months stale).

  • The ability to understand and generate content (documents, images, videos).

By themselves, however, LLMs and AI agents can only respond to instructions with responses. It is up to the environment to provide them meaningful instructions and to appropriately handle the responses. This is where the AI agentic layer of the Thunk.AI platform plays a crucial role.

  • Each task for the AI agent has coarse-grained application intent -- it is steered by the instructions of the thunk designer to achieve desired outcomes (part of the workflow process).

  • Where appropriate, the runtime environment might instruct the AI agent to dynamically plan and break down a coarse task into a finer-grained sequence of AI actions.

  • The runtime environment might steer the AI agent to execute such a planned finer-grained sequence of AI actions.

  • The AI agent might need to access and modify the workflow state or the broader application environment (eg: the files of the user or internal company databases) and use common productivity tools (eg: browsing the web, sending messages, etc). The AI agentic layer of the platform provides these capabilities via searchable content folders, and via AI tools.

Most importantly, all of these mechanisms provide capability to the AI agents, but they do not by themselves achieve control, reliability, or security. Those functions are enabled by the control sandbox environment. Every AI agent runs within, and is controlled and constrained by, a control sandbox environment. This sandbox performs several critical functions:

  1. It invokes the AI agent in a loop until the task is complete.

  2. It sets up the appropriate context and instructions to provide the AI agent.

  3. It limits the responses of the AI agent to a specific set of structured AI tool calls.

  4. It vets and validates every proposed tool call, then executes it, checks its result, and then passes it back to the AI agent for the next iteration of processing.

  5. It implements several mechanisms that mitigate common flaws in LLMs (like hallucination, inconsistency, early termination, etc).
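The five sandbox functions above can be sketched as a single loop. Everything here is a stand-in -- the toy agent, tool registry, and validation policy are invented for illustration and say nothing about the platform's real internals.

```python
def run_sandboxed(agent, tools, context, max_iters=10):
    """Invoke the agent in a loop, vetting and executing each proposed tool call."""
    for _ in range(max_iters):
        proposal = agent(context)            # 1-3: agent sees context and must
        if proposal["tool"] == "finish":     #      answer with a structured call
            return proposal["arguments"]["result"]
        if proposal["tool"] not in tools:    # 4: vet the proposed call
            context.append(("error", f"unknown tool {proposal['tool']}"))
            continue
        result = tools[proposal["tool"]](**proposal["arguments"])
        context.append((proposal["tool"], result))  # feed the result back
    # 5: one simple mitigation -- refuse to run forever on a stuck agent
    raise RuntimeError("task did not complete within the iteration budget")

# A toy agent that searches once, then finishes with what it found.
def toy_agent(context):
    if not context:
        return {"tool": "search", "arguments": {"query": "rubric"}}
    return {"tool": "finish", "arguments": {"result": context[-1][1]}}

answer = run_sandboxed(toy_agent, {"search": lambda query: "rubric v2"}, [])
```

Note how the agent never executes anything itself: every side effect passes through the sandbox's vetting step, which is what makes control and auditing possible.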

The control sandbox is the most important and novel platform innovation in the Thunk.AI platform. Read more about this important topic here.

AI reliability stems from the effective use of the control sandbox.

4.4. Content management

At runtime, the Thunk.AI platform automatically extracts the appropriate information from each document, and also indexes the content of the whole collection for effective querying. AI agents or work participants can query the content of these collections and retrieve relevant information.

4.5. Configuration and Control

Every thunk has customizable options that control the details of this runtime environment, leading to different levels of "CHARM" attributes (compliance, human-in-the-loop collaboration, automation, reliability, and modularity). These options are particularly important for enterprise applications with larger teams.


5. Enterprise-scale service

The Thunk.AI service hosts the platform and runs thunks for all users. It implements user account management and access control. It maintains the plan and data for every workflow instance in every thunk. It runs the individual AI agent tasks within each workflow instance.

The public instance of the Thunk.AI service is hosted in a public cloud and is accessed through a web browser. As a no-code self-service platform, Thunk.AI allows users to easily sign up and start creating thunks. The platform provides samples as starting points for a new thunk designer. It is easy to add work participants, share thunks with others, and rapidly develop thunk application logic.

As a business application platform, Thunk.AI also supports additional capabilities that enable thunk development and deployment in a work environment. These include pre-built connectors to external applications, versioning, team management, auditing, and automated quality control.

Enterprise customers can choose to have a custom private instance of the Thunk.AI service installed in their own cloud tenants. This provides a greater degree of control over the data and processing in their thunks. Read more about the various deployment options.


This high-level summary is just a quick overview of the breadth of the Thunk.AI platform. All the topics mentioned here are described in greater detail in individual articles at https://info.thunk.ai.
