Many high-value business processes require access to enterprise data. This article describes how Thunk.AI agents can access and utilize this data in a secure manner.
As background context, it is important to consider three elements of the Thunk.AI platform architecture:
The deployment architecture: Thunk.AI is a SaaS application. It is implemented as a set of software services hosted in the cloud. One of the services is a scalable agent host service which acts as the service environment in which every AI agent executes. Read more about the supported deployment options.
The AI Agent execution model: AI agents use large language models (LLMs) to interpret instructions and make decisions. However, AI agents always run within a control sandbox environment that limits and checks what they do. Read more about the AI agent execution model.
The use of tools: all interaction between an AI agent and the external environment happens through requested tool calls. The control sandbox verifies the requests, executes the tool calls, validates the results, and then provides them to the AI agent. Read more about access control to tools.
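The snippet below sketches, in simplified Python, the mediation loop described in the last point: the agent requests a tool call, and the control sandbox verifies the request, executes it, and validates the result. It is purely illustrative; names such as ControlSandbox and ToolRequest are assumptions made for this example, not part of the Thunk.AI platform.

```python
# Hypothetical illustration of the control-sandbox flow described above.
# All names here are assumptions chosen to make the
# request -> verify -> execute -> validate loop concrete.

from dataclasses import dataclass
from typing import Any, Callable, Dict


@dataclass
class ToolRequest:
    tool_name: str
    arguments: Dict[str, Any]


class ControlSandbox:
    """Mediates every tool call an AI agent asks for."""

    def __init__(self, allowed_tools: Dict[str, Callable[..., Any]]):
        self.allowed_tools = allowed_tools

    def handle(self, request: ToolRequest) -> Any:
        # 1. Verify the request: the agent may only call registered tools.
        if request.tool_name not in self.allowed_tools:
            raise PermissionError(f"Tool not permitted: {request.tool_name}")

        # 2. Execute the tool call on the agent's behalf.
        result = self.allowed_tools[request.tool_name](**request.arguments)

        # 3. Validate the result before returning it to the agent.
        if result is None:
            raise ValueError(f"Tool {request.tool_name} returned no result")
        return result


# Usage: the agent never calls the tool directly; it only submits a request.
sandbox = ControlSandbox(
    {"lookup_order": lambda order_id: {"id": order_id, "status": "shipped"}}
)
print(sandbox.handle(ToolRequest("lookup_order", {"order_id": "A-1001"})))
```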
Connecting to enterprise applications
AI agents (and the LLM AI models that they use) do not directly have access to any enterprise applications. All such access is controlled by, mediated by, and executed by the Thunk.AI platform through tools.
There are two ways in which a tool can communicate with an enterprise application:
Some tools may be defined to access external applications directly via a REST API, with the appropriate credentials included in the tool definition.
The preferred model is to record application credentials with a Connection, and to reference the Connection as part of the tool definition.
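The sketch below contrasts the two models. The dictionary shapes, field names, and the way a Connection is referenced are assumptions made for this example, not Thunk.AI's actual tool definition schema.

```python
# Hypothetical tool definitions illustrating the two credential models above.
# Field names and the Connection reference syntax are illustrative assumptions.

# Option 1: the tool definition embeds the REST endpoint and credentials directly.
tool_with_inline_credentials = {
    "name": "create_invoice",
    "endpoint": "https://erp.example.com/api/v1/invoices",
    "method": "POST",
    "auth": {"type": "api_key", "header": "X-API-Key", "value": "<secret-api-key>"},
}

# Option 2 (preferred): credentials live in a named Connection, and the tool
# definition only references that Connection.
tool_with_connection = {
    "name": "create_invoice",
    "endpoint": "https://erp.example.com/api/v1/invoices",
    "method": "POST",
    "connection": "erp-production",  # credentials are stored with the Connection
}
```

One common reason the Connection model is preferred is that credentials can be rotated or revoked in a single place, without editing every tool that uses them.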
In order for the AI agent control sandbox environment to successfully invoke such a tool, it needs network access to the enterprise application. This may be achieved in one of two ways:
If your deployed Thunk.AI instance is a private deployment in your corporate cloud tenant, then the Thunk.AI platform service has default network access to your corporate applications and databases.
On the other hand, if you use one of the other deployment options (for example, the public instance of Thunk.AI at https://app.thunk.ai), you might still want your AI agents to be able to access enterprise applications and databases that are protected behind a corporate firewall. In order to do so, the Thunk.AI platform needs to be able to communicate with those applications (via an API) or databases (via a SQL protocol). While some environments implement complex and custom mechanisms to enable this (various kinds of "proxy" servers), the most common mechanism is to whitelist the IP addresses of the Thunk.AI agent host services. That whitelist is provided below as of February 2025, followed by a small verification sketch. Please note that this list might occasionally change as we scale our service or modify our infrastructure.
35.239.191.23
34.16.85.96
34.136.161.249
34.67.244.155
34.44.127.89
34.66.228.101
34.60.199.231
34.30.40.194
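If you want to confirm that traffic reaching your corporate firewall or gateway really originates from these agent host addresses, the sketch below shows one simple way to check a request's source IP against the list. It is an illustration only; the actual enforcement mechanism (cloud firewall rules, a reverse proxy, database firewall settings) depends on your environment, and the list above may change over time.

```python
# A minimal sketch of checking a source IP against the published
# Thunk.AI agent host IPs listed above. The helper name and structure
# are assumptions made for this example.

import ipaddress

# Published Thunk.AI agent host IPs (as of February 2025; subject to change).
THUNK_AI_EGRESS_IPS = {
    "35.239.191.23",
    "34.16.85.96",
    "34.136.161.249",
    "34.67.244.155",
    "34.44.127.89",
    "34.66.228.101",
    "34.60.199.231",
    "34.30.40.194",
}

ALLOWED = {ipaddress.ip_address(ip) for ip in THUNK_AI_EGRESS_IPS}


def is_thunk_ai_source(source_ip: str) -> bool:
    """Return True if the given source IP is on the published whitelist."""
    try:
        return ipaddress.ip_address(source_ip) in ALLOWED
    except ValueError:
        return False


# Example: a request arriving from one of the listed addresses is accepted.
print(is_thunk_ai_source("34.67.244.155"))  # True
print(is_thunk_ai_source("203.0.113.10"))   # False
```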