The benefit of AI agents is the enormous efficiency that can be gained by the intelligent automation of work. The desired characteristics of AI agent systems are:
Autonomy -- the freedom to decide what to do
Agency -- the freedom to act without seeking approval
In a business environment, process workflows need reliable behavior -- predictable, consistent, compliant, and transparent. However, AI agents are built on large language models (LLMs), and these models are probabilistic and can make mistakes. Consequently, it is challenging to apply AI agents to business workflows.
There are inherent tradeoffs between reliable behavior and the degree of AI agency/autonomy. With most AI agent platforms and implementations, the "reliability boundary" excludes most high-value business workflows.
In this article, we describe how the Thunk.AI platform expands the reliability boundary through the concepts of Controlled Agency and Controlled Autonomy. A measure of this reliability and consistency is captured by the Task Reliability Benchmark.
The assumption is that a business workflow has to run repeatedly in similar but somewhat different contexts and with somewhat different inputs. There are four expectations of a reliable workflow in such a context:
It achieves a desired outcome in each instance.
It follows a prescribed process in each instance.
To the extent that environments and inputs vary, and to the extent that the prescribed process doesn't specify what to do, it makes intelligent decisions as appropriate.
It automates and executes its decisions in a transparent way.
There is no silver-bullet solution for AI agent reliability. Instead, a set of core principles needs to be applied, taking a "defense in depth" approach to ensure that relatively few errors occur. When errors do occur, they must be detected and corrected.
There are "meta"-design principles that guide the Thunk.AI platform:
Planning
Before any work is done, its intent and plan are explicitly articulated -- either by a human or by AI or by a combination thereof. For work that should be repetitive, the intent and plan are persisted and reused for consistency.
Work is divided into hierarchical units of smaller granularity, reducing the scope of agency and autonomy needed for each unit of work.
Each granular unit of work is individually configured for an appropriate level of autonomy and agency.
Execution
All autonomous decisions are verified before being committed -- either by a human or by AI or by a combination thereof.
All agentic actions are approved before being executed -- either by a human or by AI or by a combination thereof.
All non-deterministic agentic actions are verified after being executed -- either by a human or by AI or by a combination thereof.
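The three execution rules above can be sketched as a gate wrapped around each agentic action. This is a minimal illustration of the pattern, not Thunk.AI's actual implementation; the names (`GatedAction`, `execute_gated`) are hypothetical, and `approve`/`verify` stand in for a human or AI checker.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class GatedAction:
    """One agentic action with its pre-execution approval check
    and post-execution verification check (both hypothetical)."""
    name: str
    run: Callable[[], Any]
    approve: Callable[[str], bool]   # checked before execution
    verify: Callable[[Any], bool]    # checked after execution

def execute_gated(action: GatedAction) -> Any:
    # Approve before executing; verify the result after executing.
    if not action.approve(action.name):
        raise PermissionError(f"action {action.name!r} was not approved")
    result = action.run()
    if not action.verify(result):
        raise ValueError(f"result of {action.name!r} failed verification")
    return result

# Example: an approved action whose result passes verification.
ok = GatedAction(
    name="send_summary",
    run=lambda: "summary text",
    approve=lambda name: True,
    verify=lambda result: isinstance(result, str) and result != "",
)
```

An unapproved action never runs at all, which is the point of gating before execution rather than after.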
Design Principles during the Planning Phase
At the level of the entire AI workflow (called a "thunk"), there is an explicit planning phase that precedes the execution of the workflow.
The purpose of planning is to capture appropriate intent (steering and control) -- both at the coarse level of the whole workflow and at the finer level of the individual granular tasks that the AI agent may be asked to run.
Planning is a joint activity between the thunk designer (human) and a design-time AI agent. In this phase, the platform provides many exploratory options (greater autonomy and agency in a prototype period) and the thunk designer can make choices that achieve the right balance between control and flexibility. The thunk designer can choose to define workflows with a lot of control (over agency and autonomy), or workflows with very little control.
Controlled Autonomy
Principle of static planning: when work is explicitly planned and a plan (a sequence of steps) is articulated, it provides a process guideline for repeated, consistent execution. More detailed process guidelines lead to more consistent results.
Principle of minimal granularity: the broader the instructions given to the LLM and the broader the context it has to interpret, the more variability there will be in the results. Therefore, to achieve reliability, the platform gives the thunk designer the ability to specify the "tightest" (most granular) instructions and context.
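Taken together, static planning and minimal granularity amount to treating the plan as persisted data: a fixed sequence of tightly-scoped steps that every run of the workflow follows. A minimal sketch, with illustrative names (`PlanStep`, `INVOICE_PLAN`) that are not part of any real Thunk.AI API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PlanStep:
    """One granular unit of work with its own tight instructions."""
    name: str
    instructions: str

# A persisted static plan: the same sequence is reused on every run,
# so each workflow instance follows the same process outline.
INVOICE_PLAN = (
    PlanStep("extract", "Extract vendor, date, and total from the invoice."),
    PlanStep("validate", "Check the total against the line items."),
    PlanStep("record", "Write the validated fields to the ledger."),
)

def run_plan(plan, execute_step):
    """Execute each granular step in order; `execute_step` stands in
    for the AI agent handling one tightly-scoped unit of work."""
    return [execute_step(step) for step in plan]
```

Because each step carries only its own narrow instructions, the scope the LLM must interpret at any moment stays small, which is what limits variability.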
Controlled Agency
Principle of maximal constraints: the broader the possible set of responses from an LLM in a particular granular context, the greater the variability of those responses. Therefore, to achieve reliability, the platform gives the thunk designer the ability to independently restrict the LLM to the "tightest" (most limited) set of allowed responses in each granular work context.
Principle of minimal capability: LLMs interact with the business environment through "tools" to read or update content. These tools are provided by the Thunk.AI agent platform. To achieve reliability, the platform gives the thunk designer the ability to independently specify the "tightest" set of tools for each granular work context.
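The Principle of Minimal Capability can be pictured as deriving a per-step toolbox from a larger tool catalog, so that the LLM in a given granular context never even sees tools it is not allowed to use. A hypothetical sketch -- the tool names and helper are illustrative only:

```python
def make_toolbox(all_tools, allowed_names):
    """Restrict the tools visible to the LLM in one granular context
    to the tightest set the thunk designer allows."""
    missing = set(allowed_names) - set(all_tools)
    if missing:
        raise KeyError(f"unknown tools: {sorted(missing)}")
    return {name: all_tools[name] for name in allowed_names}

# A hypothetical platform-wide tool catalog.
ALL_TOOLS = {
    "read_record": lambda rid: {"id": rid},
    "update_record": lambda rid, data: True,
    "send_email": lambda to, body: True,
}

# A read-only extraction step gets only the reading tool.
extraction_tools = make_toolbox(ALL_TOOLS, ["read_record"])
```

A step that cannot see `send_email` cannot be talked into sending email, no matter what its inputs contain -- capability restriction is enforced outside the model.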
The workflow plan in a thunk has many granular components: it has a sequence of workflow steps, and it defines schematized state that the workflow should maintain. Every step of work is granular, and the degree of granularity is in the control of the thunk designer. Finer granularity leads to a more specific process. Coarser granularity leads to more flexibility in dealing with dynamic environments. The actual choice of workflow step granularity rests with the thunk designer, reflecting the needs of the particular business process.
Each granular step of the workflow includes detailed AI agent instructions. These instructions have two elements -- steering (what it should do, includes examples which are particularly useful in guiding an LLM) and control (what it is allowed to do). The critical aspect influencing reliability is control. For example, the input and output data properties are specified and each of them has specific constrained types. This reflects the Principle of Maximal Constraints.
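The constrained input and output properties described above can be pictured as a small typed schema against which every step output is deterministically checked. The schema contents and function names here are illustrative assumptions, not a real Thunk.AI schema:

```python
# Hypothetical constrained output properties for one workflow step.
STEP_OUTPUT_SCHEMA = {
    "invoice_total": float,
    "vendor_name": str,
    "approved": bool,
}

def validate_output(schema, output):
    """Deterministically reject any step output that does not match
    the declared property names and types; returns a list of errors."""
    errors = []
    for key, expected in schema.items():
        if key not in output:
            errors.append(f"missing property: {key}")
        elif not isinstance(output[key], expected):
            errors.append(f"{key}: expected {expected.__name__}")
    for key in output:
        if key not in schema:
            errors.append(f"unexpected property: {key}")
    return errors
```

Because the check is deterministic, it costs nothing in model calls and catches an entire class of drift before any downstream step sees the data.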
At runtime, the AI agent engages with the LLM in an iterative conversational loop, but only allows it to respond by invoking one of the tools provided. Free text responses (one of the greatest sources of randomization) are not allowed. The set of tools provided to the LLM in every conversational iteration is restricted by the thunk designer as part of the planning process, reflecting the Principle of Minimal Capability.
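One way to picture this tool-call-only loop: on every turn the model must answer with a tool name and arguments, and anything else is rejected. The model here is a scripted stand-in rather than a real LLM client, and the loop shape and designated `finish` tool are assumptions for illustration:

```python
def agent_loop(model, tools, max_iters=5):
    """Iterative conversational loop in which the model may ONLY
    respond by invoking one of the provided tools; free-text or
    unknown-tool responses are rejected."""
    transcript = []
    for _ in range(max_iters):
        response = model(transcript)
        name = response.get("tool")
        if name not in tools:
            raise ValueError("free-text or unknown-tool response rejected")
        result = tools[name](**response.get("args", {}))
        transcript.append((name, result))
        if name == "finish":  # a designated terminal tool ends the loop
            return result
    raise RuntimeError("iteration budget exhausted")

# A scripted stand-in model: look something up, then finish with it.
def scripted_model(transcript):
    if not transcript:
        return {"tool": "lookup", "args": {"key": "total"}}
    return {"tool": "finish", "args": {"value": transcript[-1][1]}}

TOOLS = {"lookup": lambda key: 42, "finish": lambda value: value}
```

The restricted toolset passed in as `tools` is exactly the per-step toolbox chosen during planning, so the Principle of Minimal Capability is enforced on every single iteration.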
Design Principles during the Execution Phase
AI workflow execution is orchestrated by the Thunk.AI platform and is primarily automated. The reliability of AI agents during this phase is largely determined by the choices made during the planning phase, and it is further improved by implementation design decisions made by the Thunk.AI platform.
Controlled Autonomy
Principle of dynamic planning: most individual AI agent tasks require multiple iterations and tool invocations. Dynamic micro-planning of individual AI agent tasks increases the reliability of agent execution.
Principle of explanation: The platform requires the LLM to provide reasoning for its responses that is consistent with the plan and goals of the workflow. This explicit explanation forces greater consistency with the intended workflow process.
Controlled Agency
Principle of checkpointing: The platform requires the LLM to update schematized state with its partial progress or results. This improves alignment with the desired outcomes, increases reliability, and makes the work transparent to users.
Principle of verification: The platform checks every LLM response for validity. This creates an opportunity to correct and refine results. There can be a variety of checks, including deterministic checks (e.g., for schema conformance), checks implemented by LLM calls (e.g., for semantic conformance to constraints), and human-in-the-loop verification.
Since every individual task execution involves (a) potentially multiple iterations with the LLM, (b) multiple tool calls, and (c) variable environments and inputs, the Thunk.AI platform always starts with "micro-planning" the task. This reflects the Principle of Dynamic Planning. The dynamic micro-plan is itself constrained by the available tools and by the data bindings specified during the initial planning phase, so it creates a further level of detail for subsequent execution. By explicitly requiring articulation of the micro-plan, the AI agent platform steers subsequent stages of the iteration in a consistent direction.
Every response from the LLM is a tool call with arguments and importantly, an explanation. This reflects the Principle of Explanation. There are three benefits of these explanations. One important benefit is that the explanation increases the alignment of the LLM's immediate response with the desired goal and plan. In effect, the requirement to provide a rational explanation acts as a constraint on the response of the LLM. A second benefit is that the explanation reinforces alignment of subsequent LLM responses with the plan. Finally, the explanations are useful for human validation.
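A simple way to enforce this response shape is to validate each model response against the required fields before acting on it. The field names (`tool`, `args`, `explanation`) are illustrative, not Thunk.AI's actual wire format:

```python
REQUIRED_FIELDS = ("tool", "args", "explanation")

def check_tool_call(response):
    """Every model response must name a tool, supply its arguments,
    and carry a non-empty explanation of why this call advances
    the plan."""
    for field in REQUIRED_FIELDS:
        if field not in response:
            raise ValueError(f"missing field: {field}")
    if not str(response["explanation"]).strip():
        raise ValueError("explanation must be non-empty")
    return response
```

Making the explanation a hard requirement, rather than an optional nicety, is what turns it into a constraint on the model's behavior.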
The platform steers the AI agent to checkpoint its work and update the workflow state as work progresses. Since the workflow state is schematized and structured, this imposes constraints on the output of the LLM. This reflects the Principle of Checkpointing. Just like the principle of explanation, this increases alignment of the LLM's responses with the desired outcomes.
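Checkpointing into schematized state can be sketched as a merge that accepts only declared properties with their declared types. The schema and names are hypothetical:

```python
# Hypothetical schematized workflow state for an invoice workflow.
STATE_SCHEMA = {"invoice_total": float, "status": str}

def checkpoint(state, updates):
    """Merge partial progress into the workflow state, accepting only
    properties declared in the schema, with the declared types."""
    for key, value in updates.items():
        if key not in STATE_SCHEMA:
            raise KeyError(f"undeclared state property: {key}")
        if not isinstance(value, STATE_SCHEMA[key]):
            raise TypeError(f"{key}: expected {STATE_SCHEMA[key].__name__}")
    return {**state, **updates}
```

Because each checkpoint is structured data rather than free text, partial progress is both machine-checkable and directly readable by users watching the workflow.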
Finally, every response of the LLM, every tool result, and every workflow step is checked for consistency. This reflects the Principle of Verification. If the verification identifies inconsistencies or inaccuracies, these are fed back to the LLM for correction. There are many kinds of verification. Conformance with schema and structure is the most obvious and deterministic. More subjective verification is also very valuable -- for example, whether an LLM response conforms to policy, or whether it satisfies the descriptions of tool arguments or of workflow state properties.
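This verify-and-correct cycle can be sketched as a loop that feeds check failures back to the model until the result passes or an attempt budget runs out. Here `produce` is a scripted stand-in for an LLM call, and all names are illustrative:

```python
def verify_and_correct(produce, checks, max_attempts=3):
    """Run a model call, apply every check to its result, and feed
    failure messages back for correction until all checks pass."""
    feedback = []
    for _ in range(max_attempts):
        result = produce(feedback)
        feedback = [msg for check in checks for msg in check(result)]
        if not feedback:
            return result
    raise RuntimeError(f"unresolved after {max_attempts} attempts: {feedback}")

# Scripted stand-in: produces a bad result first, corrects on feedback.
attempts = []
def produce(feedback):
    attempts.append(list(feedback))
    return {"total": -1.0} if not feedback else {"total": 10.0}

def non_negative(result):
    return [] if result["total"] >= 0 else ["total must be non-negative"]

corrected = verify_and_correct(produce, [non_negative])
```

The `checks` list is where deterministic schema checks, LLM-based semantic checks, and human-in-the-loop review would all plug into the same cycle.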
A measure of the Thunk.AI platform reliability for this task-level granular agent activity is captured by the Task Reliability Benchmark.
The platform also offers human-in-the-loop approval as a final category of verification that can optionally be required.
End-to-end workflow reliability
In practice, the end-to-end reliability of AI agent automation depends on a combination of four factors:
The nature of the workflow process --- how specific the process is and how much "intelligent" decision-making is expected from AI agents to handle variability of inputs and contexts.
The level of instruction details provided by the thunk designer during the planning phase
The inherent reliability of the AI agent platform in following the plan, adhering to provided instructions, and controlling the LLM's responses towards the desired outcomes (the primary focus of this article!)
The actual degree of variability of the runtime workload.