An overview of the Thunk.AI platform

A quick tour of all the Thunk.AI concepts

Written by Praveen Seshadri

Thunk.AI is a platform to create and run AI-native applications called "thunks".

What is an "AI-native" application? We've written a whole article about it! It is a modern kind of application that uses AI at its core. It is fundamentally different from a traditional software application that just adds a few AI-based features (like a chatbot or a co-pilot). In contrast, an AI-native application uses AI pervasively to automate work and to make intelligent decisions while doing the work. This new kind of application requires a new kind of application platform.

While there are many different use cases for AI-native applications, the Thunk.AI platform focuses on one particularly useful class of applications --- those that automate "human workflow processes". These processes are common in a business team or organization. For example, a process used to source and hire for a job opening, or a process used by a funding agency to accept project proposals, evaluate them, and decide which ones to fund.

The characteristics of human workflow processes are:

  • Requests are received and processed using a standard workflow or protocol.

  • Each request may require several steps and may take a long period of time.

  • Each step may be complex and subjective, traditionally requiring human expertise (hence the term "human workflow").

  • The workflow may span multiple systems, may involve multiple sources of data, and may involve different people working on different steps.

Using the Thunk.AI platform, AI-native applications can be created and run to implement these processes. AI agents do the routine and repetitive work that human employees would traditionally have done. In brief, the platform automates the workflow process utilizing appropriate AI agent tasks. This leads to significant productivity gains.

Let's look at the key aspects of the Thunk.AI platform:

  1. A user model where every user is paired with an AI agent.

  2. An application model for AI-native applications called "thunks".

  3. An intelligent design environment for thunks.

  4. An intelligent execution environment for thunks.

  5. A scalable service that hosts and runs many thunks.


1. User model

Users sign up for an account with Thunk.AI. Each user in Thunk.AI also has an AI agent to assist them and automate their work. The user's AI agent utilizes a generative-AI model to understand and execute work on behalf of the user.

1.1. User roles

There are three user roles within a thunk:

  1. Thunk Owner/Designer: this is a user who authors the logic of a thunk. In a traditional application, this would have required programming skills. But since Thunk.AI is a no-code intelligent application platform, the application logic is described in natural language. As a result, a broader class of users can design thunks.

  2. Work Participant: this is a user who may be assigned a work step as part of the thunk workflow. The work participant along with their AI agent are responsible for performing that step.

  3. End User: this is a user who can only engage with the thunk by making a request.

Typically, the designers and the work participants are members of the team or organization providing the application or service, while the end user might be an internal or external consumer of the application.

1.2. Duality of AI agent and human user

Duality is a core concept in the design of Thunk.AI. It means that the user and the user's AI agent have similar capabilities, information, and instructions.

Because of the principle of duality, AI agents receive the same instructions and have access to the same tools and applications as their human counterparts. In other words, AI agents are treated just like human users. This makes it easier for human users to understand the work of the AI agents, collaborate with them, correct them, and audit their decisions. Conversely, it also becomes easier for each AI agent to learn and specialize over time, becoming a better assistant on behalf of its user.



2. Application model

A "thunk" is an AI-native application that receives end-user requests and executes an AI-powered workflow process based on those requests.

2.1. Workflows

Each thunk describes a workflow that represents a business process. The workflow logic is provided at design-time and defines what to do at each step of the process. The thunk owner describes the workflow logic using natural language, articulating high-level goals and outcomes. For example, the thunk owner could say: "for each project proposal, collect past history and reviews of the vendor, check that the proposal covers all aspects of the RFP, and provide a proposal evaluation score according to a specific rubric".

Guided by this instruction, their AI agent forms a workflow "Plan" that has a sequence of steps to execute the process. At runtime, each request will run as a separate workflow instance. The platform will instantiate a sequence of steps to process the request and assign each step to a work participant.
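To make the idea of a generated Plan more concrete, here is a minimal sketch of how such a plan could be represented as structured data for the proposal-evaluation example above. The class and field names are purely illustrative assumptions, not the platform's internal format.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    """One step of a workflow plan (illustrative only)."""
    name: str
    instructions: str  # natural-language logic for the step
    assigned_role: str = "work_participant"

@dataclass
class WorkflowPlan:
    """A sequence of steps derived from the thunk owner's instructions."""
    goal: str
    steps: list[PlanStep] = field(default_factory=list)

# A hypothetical plan for the proposal-evaluation example above.
proposal_plan = WorkflowPlan(
    goal="Evaluate each incoming project proposal against the RFP",
    steps=[
        PlanStep("Collect", "Accept proposal submissions from the webhook inbox"),
        PlanStep("Research vendor", "Collect past history and reviews of the vendor"),
        PlanStep("Check coverage", "Check that the proposal covers all aspects of the RFP"),
        PlanStep("Score", "Provide a proposal evaluation score according to the rubric"),
    ],
)

for step in proposal_plan.steps:
    print(step.name, "->", step.instructions)
```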

Every plan also has an initial step that can "Collect" workflow requests. These may be explicitly collected via human or AI activity (eg: search the web and find vendors who could be approached to submit project proposals) or they may be collected by reacting to incoming messages (eg: direct submissions from an application form to the thunk so that each submission becomes a workflow request). Each thunk has an "Inbox" so that email messages can be sent to it and a "webhook inbox" so that form submissions can be routed to it. Change events from external applications may also be routed to the thunk via the inbox.
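As a rough illustration of the webhook inbox, the snippet below shows how a form submission might be forwarded to a thunk as an HTTP POST so that it becomes a workflow request. The URL and payload fields are placeholders, not the actual Thunk.AI webhook format.

```python
import json
import urllib.request

# Placeholder webhook inbox URL for a thunk; the real URL comes from the thunk's settings.
WEBHOOK_URL = "https://example.invalid/thunks/<thunk-id>/inbox"

# A form submission that should become one workflow request.
submission = {
    "vendor": "Acme Robotics",
    "proposal_title": "Warehouse automation pilot",
    "contact_email": "bids@acme.example",
}

request = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(submission).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request)  # uncomment to actually send the submission
```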

Typically, the thunk owner also provides descriptions of the logic to run for each step in the plan. More detail leads to more reliable and repeatable execution, but with less flexibility. For example, the AI formula column feature provides very granular instructions to compute and record specific values during a step of the workflow. Conversely, less detail gives the work participants and their AI agents more flexibility to make intelligent dynamic decisions.

2.2. AI agents

Generative-AI models lie at the heart of the Thunk.AI platform. While the default model is GPT-4o from OpenAI, the platform also supports the Claude model from Anthropic and other LLMs. Thunk owners can provide their own API keys for these models to achieve better control over AI behavior and usage.

All these generative-AI models have some common core capabilities:

  • The ability to have contextual conversational interaction and follow user-provided instructions.

  • "Knowledge" of a broad collection of information from public sources (though usually a few months stale).

  • The ability to understand and generate content (documents, images, videos).

By themselves, however, generative-AI models can only respond to instructions with generated output. They cannot directly do meaningful work.

The user's AI agent, however, is far richer than the underlying generative-AI model. The AI agent operates in a controlled runtime environment that the Thunk.AI platform provides. Within this environment, it uses the capabilities of the AI model to do meaningful application work:

  • The AI agent has application intent -- it is steered by the instructions of the app owner to achieve desired outcomes (run the workflow process).

  • The AI agent can access and modify the work environment (eg: the files of the user or internal company databases) and use common productivity tools (eg: browsing the web, sending messages, etc). The platform provides these capabilities via searchable content collections, and via "AI tools" grouped into "AI modules".

  • Where appropriate, in a specific context, the AI agent can dynamically plan and execute a chain of actions to achieve a desired outcome.

2.3. Content collections

Every thunk can define and populate content collections to provide information relevant to the intended work. For example, a product support thunk may define a collection of product usage documents that can help answer customer support questions.

These content collections can include documents in a variety of formats, images, video, or web pages. They can hold a small focused set of content, or they can hold thousands of documents. The thunk designer can also provide instructions that guide AI agents in the pre-extraction of structured properties from content collections. For example, the authors of a document might be pre-extracted as a structured property.

At runtime, the Thunk.AI platform automatically extracts the appropriate information from each document, and also indexes the content of the whole collection for effective querying. AI agents or work participants can query the content of these collections and retrieve relevant information.
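The indexing and querying described above follows the familiar pattern of retrieval over embedded document chunks. The sketch below illustrates that general pattern only; none of these functions are Thunk.AI APIs, and the toy embedding stands in for a real embedding model.

```python
import math

def embed(text: str) -> list[float]:
    """Stand-in embedding: a real system would call an embedding model here."""
    # Hash characters into a tiny fixed-size vector, purely for illustration.
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch)
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Index: each document chunk is stored alongside its embedding.
collection = [
    "How to reset a device to factory settings",
    "Warranty terms for enterprise customers",
    "Troubleshooting network connectivity issues",
]
index = [(chunk, embed(chunk)) for chunk in collection]

# Query: rank chunks by similarity to the question and keep the top result.
query_vec = embed("customer asks about warranty coverage")
ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
print(ranked[0][0])  # best-matching chunk under the toy embedding
```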

The Thunk.AI platform also pre-defines standardized content collections for organizational policy and workflow knowhow, as these are important concepts across all thunks.

2.4. AI tools and modules

An AI tool is a skill or capability provided to an AI agent to extend what it can do. Often, tools are a mechanism to connect an AI agent to the rest of the business work environment. Instead of responding to an instruction with just a message, an AI model can respond by invoking an AI tool with appropriate parameters.

Some tools are used to fetch information (eg: search a company database) while other tools perform actions (eg: create new documents or update external systems).

Collections of related AI tools are organized into AI modules. The thunk designer can enable or disable AI modules and tools as needed to steer the desired behavior of AI agents.
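Tool use follows the common function-calling pattern: instead of replying with plain text, the model replies with a structured tool call, which the runtime then executes. The sketch below illustrates that dispatch step with made-up tool names; it is not the platform's module API.

```python
# Hypothetical tools that a module might expose to an AI agent.
def search_company_database(query: str) -> list[str]:
    """Fetch-style tool: returns matching records (stubbed here)."""
    return [f"record matching '{query}'"]

def create_document(title: str, body: str) -> str:
    """Action-style tool: creates a document and returns its id (stubbed here)."""
    return f"doc-123:{title}"

TOOLS = {
    "search_company_database": search_company_database,
    "create_document": create_document,
}

# A structured tool call, as a model might emit instead of a plain text reply.
tool_call = {"name": "search_company_database", "arguments": {"query": "vendor Acme"}}

# The runtime checks that the tool is enabled for this thunk, then invokes it.
result = TOOLS[tool_call["name"]](**tool_call["arguments"])
print(result)
```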

Here is a list of platform-defined AI modules available to every thunk:

  • Web module: tools to search the web, answer questions based on web content, read and extract information from web pages

  • File system module: tools to read, create, update, move, and delete files in a cloud file system (Google Drive or Office 365, depending on the user's work environment).

  • Image processing module: tools to edit and query image content

  • Video processing module: tools to edit and query video content

  • Communications module: tools to draft or send email and messages

  • Document templating module: tools to create documents based on parameterized templates

In addition, each thunk designer (with the help of their AI agent) can create their own specialized AI tools (eg: to connect to specific enterprise systems or databases). Some of the tools and content collections in a thunk can also be exported as a reusable module, shared with others, and imported for use into another thunk.


3. Intelligent design environment

During the design phase, the thunk owner/designer is assisted by their AI agent to create a thunk.

  • The owner uses concise natural language instructions to describe the desired workflow. This launches AI agent tasks to generate an appropriate workflow plan and appropriate state management data structures.

  • The thunk designer may describe content collections to build. This launches AI agent tasks to automatically build the content collections, extract appropriate information from them, and create the appropriate retrieval indexes.

  • The thunk designer may describe custom AI tools to build. These requests also launch AI agent tasks to automatically build the tools.

  • The thunk owner may establish connections to external business applications, import AI modules, and modify the various settings that control AI agent behavior.

  • The thunk owner may add and manage work participants and end users who have access to the thunk.

  • At any time during the design phase, the thunk owner can interact with their AI agent via a chat interface, providing instructions or seeking guidance.


4. Intelligent runtime environment

The Thunk.AI platform runtime runs each request as a separate workflow instance. It maintains the state of the workflow instance, launching the appropriate steps at the right times to evaluate the request, record results, and take appropriate actions. Within this execution, there are many tasks that an AI agent (the thunk owner's AI agent or a work participant's AI agent) may need to perform.

  • When new workflow requests are collected, an AI agent task is launched (on behalf of the thunk owner) to parse the requests and create structured workflow requests.

  • The Thunk.AI platform then orchestrates each workflow request in accordance with the plan, creating appropriate workflow steps.

  • An AI agent task is launched (on behalf of the thunk owner) to assign each step to an appropriate work participant.

  • An AI agent task is launched (on behalf of the assigned work participant) to execute the instructions for that workflow step. This may spawn several other tasks as part of getting the work done.

  • At any stage while working on a step, the work participant can interact with their AI agent using a chat interface. Any instructions from the work participant during this interaction may lead to new AI agent tasks.

  • When the work participant or their AI agent decides that the workflow step is complete, an AI agent task is launched (on behalf of the thunk owner) to validate the data and the result of the workflow step.

For each of these types of AI agent tasks, the thunk designer has provided upfront instructions and configuration as part of the application model. The Thunk.AI platform automatically launches the right AI agent tasks at the right time in an execution environment with appropriate instructions, AI tools, state, and guardrails to ensure compliance and reliability.
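Put together, the runtime behaves roughly like the loop sketched below: for each request, instantiate the plan's steps, assign and execute each step, then validate the result. All names here are illustrative stand-ins, not platform code; in practice the orchestration is handled entirely by the Thunk.AI service.

```python
def run_workflow_instance(request, plan, assign, execute, validate):
    """Illustrative orchestration of one workflow request (not platform code).

    assign/execute/validate stand in for AI agent tasks launched on behalf of
    the thunk owner or the assigned work participant.
    """
    results = {}
    for step in plan:
        participant = assign(step, request)            # owner's agent picks a work participant
        outcome = execute(step, request, participant)  # participant's agent does the work
        if not validate(step, outcome):                # owner's agent checks the result
            outcome = execute(step, request, participant)  # e.g. retry once on failure
        results[step] = outcome
    return results

# Minimal usage with stand-in functions.
plan = ["Research vendor", "Check RFP coverage", "Score proposal"]
result = run_workflow_instance(
    request={"vendor": "Acme Robotics"},
    plan=plan,
    assign=lambda step, req: "alice@example.com",
    execute=lambda step, req, who: f"{step} done by {who}",
    validate=lambda step, outcome: True,
)
print(result)
```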


Every thunk has customizable options that control the details of this runtime environment, leading to different levels of "CHARM" attributes (compliance, human-in-the-loop collaboration, automation, reliability, and modularity). These attributes are particularly important for enterprise applications.


5. Scalable service

The Thunk.AI service hosts the platform and runs thunks for all users. It implements user account management and access control. It maintains the plan and data for every workflow instance in every thunk. It runs the individual AI agent tasks within each workflow instance.

The public instance of the Thunk.AI service is hosted in a public cloud and is accessed through a web browser. Corporate customers can choose to have a custom isolated instance of the Thunk.AI service installed in their own cloud tenants. This provides a greater degree of control over the data and processing in their thunks.

As a self-service platform, Thunk.AI allows users to easily sign up and start creating thunks. The platform provides samples as starting points for a new thunk designer. It is easy to add work participants, share thunks with others, and rapidly develop thunk application logic.

As an enterprise platform, Thunk.AI also supports additional capabilities that enable thunk development and deployment in a business environment. These include pre-built connectors to external applications, versioning, team management, auditing, and automated quality control.


This high-level summary is meant to provide a quick overview of the breadth of the Thunk.AI platform. Many of the topics mentioned here are described in greater detail in individual articles at https://info.thunk.ai.

