An earlier article described how AI agents in Thunk.AI run within a controlled sandbox environment called the "AI Guardian". The AI agents and the AI Guardian environment act with the identity of the thunk owner. All interaction with the external world and the business environment (access to data and files, or updates) occurs via tool calls executed by this sandboxed execution environment. In other words, every tool call is made with the identity of the thunk owner.
From an external perspective, a thunk workflow runs the thunk owner's instructions on behalf of the thunk owner. It invokes tools using the thunk owner's identity, or the credentials (e.g., a database username and password) that the owner configured for the thunk's modules.
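To make this concrete, here is a minimal Python sketch of the pattern, assuming hypothetical names (`OwnerCredentials`, `ToolCall`, `GuardianSandbox`); it is not Thunk.AI's actual implementation, only an illustration of a sandbox attaching the owner's identity to every tool call while keeping the credentials out of the agent's reach.

```python
# Hypothetical sketch: a sandbox that executes tool calls on behalf of an agent,
# always under the thunk owner's identity. All names are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class OwnerCredentials:
    """Credentials the thunk owner configured (e.g., a database username and password)."""
    username: str
    secret: str


@dataclass
class ToolCall:
    """A tool invocation requested by an AI agent: a tool name plus its parameters."""
    tool_name: str
    params: dict


class GuardianSandbox:
    """Executes tool calls for agents; the agent never handles the credentials."""

    def __init__(self, owner: OwnerCredentials):
        self._owner = owner  # held by the sandbox, never exposed to the agent

    def execute(self, call: ToolCall) -> dict:
        # The sandbox, not the agent, supplies the identity on every outbound call.
        return {
            "tool": call.tool_name,
            "params": call.params,
            "ran_as": self._owner.username,
        }


sandbox = GuardianSandbox(OwnerCredentials("owner@example.com", "configured-secret"))
print(sandbox.execute(ToolCall("query_database", {"table": "orders"})))
```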
Tool calls can be restricted in various ways (e.g., limiting API parameters or exposing only specific folders in a file system). This allows thunk owners to grant minimal capabilities to both human and AI agents, without needing complex (and often unsupported) granular access controls on external business applications.
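As a concrete illustration of this kind of restriction, the hypothetical sketch below scopes a file-read tool to a fixed set of folders chosen by the thunk owner; the paths and function names are assumptions, not Thunk.AI's actual configuration format.

```python
# Hypothetical sketch: a file-read tool restricted to folders the thunk owner exposed.
from pathlib import Path

# Folders the thunk owner chose to expose to agents in this thunk (illustrative paths).
ALLOWED_FOLDERS = [Path("/data/invoices"), Path("/data/reports")]


def read_file(requested: str) -> str:
    """Serve a file only if it resolves inside one of the exposed folders."""
    path = Path(requested).resolve()
    if not any(path.is_relative_to(folder) for folder in ALLOWED_FOLDERS):
        raise PermissionError(f"'{requested}' is outside the folders exposed to this thunk")
    return path.read_text()
```

A request such as `read_file("/data/invoices/../../etc/passwd")` is rejected because the resolved path falls outside the exposed folders, even though no granular access control was needed on the file system itself.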
Each AI agent in a thunk can access only the data, and perform only the actions, that the thunk owner is authorized to access and perform and that the owner has explicitly allowed via tool configuration. There is no way for an AI agent to enter a more privileged mode or escalate its permissions beyond those that the thunk owner has specified.
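One way to see why escalation is not possible: conceptually, the sandbox dispatches only the tools that appear in the owner's configuration, so there is no privileged code path for an agent to reach. The sketch below is a simplified, hypothetical illustration of that check.

```python
# Hypothetical sketch: the sandbox rejects any tool the owner did not configure.
CONFIGURED_TOOLS = {"query_database", "read_file"}  # chosen by the thunk owner


def dispatch(tool_name: str, params: dict) -> dict:
    """Dispatch a tool call, refusing anything outside the owner's configuration."""
    if tool_name not in CONFIGURED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is not configured for this thunk")
    return {"tool": tool_name, "params": params}
```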
