Every step of a workflow plan is controlled by AI logic defined in an AI Instruction. The same AI Instruction format also controls other parts of a thunk where intelligent decisions may be needed:
Inbound requests via webhooks or email are processed using AI logic.
Custom AI tools are defined using AI logic.
Intelligent properties in content folders are defined using AI logic.
Assignment of steps to human agents can be based on AI logic.
Most people are familiar with the concept of natural language prompts used to instruct an AI system. At the simplest level, an AI Instruction in Thunk.AI might be considered a “prompt”, but it has additional structure and detail in order to steer and control AI agents effectively. The components of an AI Instruction are:
Directions (or Prompts)
Examples
Input and output state bindings
Tool configuration
Human-in-the-loop settings
Execution settings
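Taken together, the components above can be pictured as one structured object. The sketch below is purely illustrative: the field names are assumptions for this document, not Thunk.AI's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these field names are assumptions made for
# this document, not Thunk.AI's actual AI Instruction schema.
@dataclass
class AIInstruction:
    directions: str                                           # required natural-language prompt
    examples: list = field(default_factory=list)              # positive/negative examples
    input_bindings: list = field(default_factory=list)        # subset of workflow state read
    output_bindings: list = field(default_factory=list)       # properties the agent must record
    tool_config: dict = field(default_factory=dict)           # enabled/disabled tools
    human_in_the_loop: dict = field(default_factory=dict)     # approval/assignment settings
    execution: dict = field(default_factory=dict)             # model choice, retries, scheduling

# Only the directions are required; everything else has a default.
instruction = AIInstruction(directions="Find the open working hours for the shop")
```

Note that only `directions` is mandatory, matching the statement below that the directions section is required.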
Directions
The directions are natural language prompts that describe the problem to be solved and how it should be solved. This section is required. Directions can be written at different levels of specificity. For example, here are a few different ways to provide directions for a workflow step that needs to find the open working hours for a specific coffee shop:
“Find the open working hours for the shop”
“Find the open working hours for the shop. Check the operating procedure documents to find out how to do this.”
“Do a web search to find the open working hours for the shop”
“Use a Google places search to find the open working hours for the shop”
Each of these is a valid way to provide directions to an AI agent. Each has a different level of specificity and detail. There is usually a strong correlation between greater specificity and greater reliability.
The directions do not have to be text alone. It is quite common to include links to relevant files (e.g., “look at this spreadsheet file … to map expenses to internal accounting categories”).
Examples
AI Instructions can optionally include positive (what to do) and negative (what not to do) examples. AI agents are good at following patterns, so the examples serve as patterns to share with the AI agent.
This is particularly useful for situations where an example is much easier to provide than a textual description:
When you need to provide an example of an input file
When you need to provide an example of an input file, annotated to indicate important sections or details
When you need to describe a desired result format.
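One way to picture positive and negative examples is as annotated pattern entries attached to the instruction. The structure below is a hypothetical sketch; the keys and sample content are assumptions, not Thunk.AI's actual format.

```python
# Hypothetical structure for examples attached to an AI Instruction.
# The keys ("kind", "note", "content") and sample values are invented
# for illustration.
examples = [
    {"kind": "positive",
     "note": "Desired result format for opening hours",
     "content": "Mon-Fri: 7:00-18:00; Sat-Sun: 8:00-16:00"},
    {"kind": "negative",
     "note": "Do not return free-form prose",
     "content": "The shop is usually open in the mornings."},
]

# The positive entries show the agent what to produce; the negative
# entries show patterns to avoid.
positive = [e for e in examples if e["kind"] == "positive"]
```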
Input and Output Bindings
The AI Instructions can constrain which subset of the workflow state is used as input to a particular step. This is valuable for focusing the AI agent on specific data inputs.
Likewise, the AI Instructions can include output state bindings that specify a subset of the workflow state that should be the outputs of a step. Since the state is composed of schematized properties, this effectively instructs the AI agent that it is expected to compute and record those specific properties. The name, description, and type constraints of the properties steer and constrain the AI agent to act more reliably and consistently.
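Continuing the coffee-shop example, output bindings might be sketched as a list of schematized properties with a name, description, and type, plus a check that the agent actually recorded each bound property. Everything here is an assumption for illustration, not Thunk.AI's real binding format.

```python
# Illustrative sketch (names and structure are assumptions): an output
# binding selects schematized workflow-state properties that the agent
# is expected to compute and record.
output_bindings = [
    {"name": "opening_time", "description": "Daily opening time", "type": "time"},
    {"name": "closing_time", "description": "Daily closing time", "type": "time"},
]

def validate_output(bindings: list, result: dict) -> bool:
    """Check that the agent recorded every bound property."""
    return all(b["name"] in result for b in bindings)

# A result that records both bound properties passes the check.
ok = validate_output(output_bindings, {"opening_time": "07:00", "closing_time": "18:00"})
```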
Tool Configuration
Every AI Instruction can disable particular tool libraries, or individual tools within those libraries. Constraints can also be added to each individual tool.
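A tool configuration along these lines could be modeled as a nested structure: libraries, tools within them, and per-tool constraints. The library and tool names below are invented for illustration and do not reflect Thunk.AI's actual tool catalog.

```python
# Hypothetical tool configuration; library names, tool names, and keys
# are invented for illustration.
tool_config = {
    "web_search": {
        "enabled": True,
        "tools": {
            "places_search": {"enabled": True,
                              "constraints": "Only search for businesses in the shop's city"},
        },
    },
    "email": {"enabled": False, "tools": {}},   # whole library disabled
}

def tool_allowed(config: dict, library: str, tool: str) -> bool:
    """A tool is usable only if its library and the tool itself are enabled."""
    lib = config.get(library)
    if not lib or not lib["enabled"]:
        return False
    entry = lib["tools"].get(tool)
    return bool(entry and entry["enabled"])
```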
Execution Settings
Various configuration options include:
Choice of AI model: the default choice is set at the thunk level but can be overridden in specific AI instructions
Use of dynamic planning: useful when each agent execution needs to make different choices of tools and execution order because the overall instructions are broad or the inputs are varied
Use of scheduled execution: useful if AI logic needs to run on a periodic schedule for a long-running workflow step
Use of delayed retry if an agent fails for some reason: useful for handling transient errors in external systems
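The delayed-retry setting in the list above is essentially retry-after-a-pause for transient failures. The sketch below shows the general pattern; the function and parameter names are assumptions, not Thunk.AI's API.

```python
import time

# Sketch of the delayed-retry idea: re-run a step after a pause when a
# transient external error occurs. Names and defaults are assumptions.
def run_with_retry(step, max_attempts=3, delay_seconds=0.01):
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except RuntimeError:
            if attempt == max_attempts:
                raise                       # out of retries: surface the error
            time.sleep(delay_seconds)       # delayed retry for transient errors

# A step that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient external error")
    return "done"

result = run_with_retry(flaky_step)
```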
Human-In-the-loop Settings
As described in the section on Users and Access Controls, a thunk can have many users registered as “human agents”. It is important to have granular mechanisms by which human-in-the-loop engagement can be managed. Various configuration options include:
Step assignment: Every step of the workflow plan can define how individual instances of the step should be assigned to specific human agents. For example, in a thunk that implements an expense receipt approval process, the second step, which performs currency conversion, could specify: “This step should be assigned to Tony or Galex”.
Require approval for step start and finish: specify if the step can start and finish automatically without explicit approval from the human agent.
Require approval for conversation responses: specify if the agent can auto-respond in external conversations without explicit approval from the human agent.
Restrictions on what the human agent can tell the AI agent: pre-canned responses for human agents, and permissions that control whether the human agent can provide dynamic instructions to the AI agent.
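The human-in-the-loop options above can be summarized as a small per-step settings object. The keys below mirror the options described in this section but are assumptions for illustration, not Thunk.AI's actual settings schema.

```python
# Illustrative human-in-the-loop settings for one workflow step; the
# keys mirror the options described above but are invented for this sketch.
hitl_settings = {
    "assignment": "This step should be assigned to Tony or Galex",
    "require_approval_to_start": True,
    "require_approval_to_finish": True,
    "auto_respond_in_conversations": False,   # responses need human approval
    "canned_responses": ["Approve", "Reject", "Request more detail"],
    "allow_dynamic_instructions": False,      # human agent limited to canned responses
}

def can_auto_respond(settings: dict) -> bool:
    """May the AI agent reply in external conversations without approval?"""
    return settings["auto_respond_in_conversations"]
```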
