AI Agents working with Enterprise Data

Allow AI agents to leverage your enterprise data securely.

Written by Praveen Seshadri

Many high-value business processes require access to enterprise data. This article describes how Thunk.AI agents can access and utilize this data in a secure manner.

Where does Thunk.AI run?

Thunk.AI is a SaaS application. It is implemented as a set of software services hosted in the cloud. One of the services is a scalable agent host service that is the environment in which every AI agent executes.

The public instance of the Thunk.AI SaaS service (accessible at https://app.thunk.ai) is hosted in the Google Cloud Platform (GCP) public cloud. When you sign in and use this service, all work happening on your behalf (including any AI agents working on your behalf) is executed by software running inside this agent host service on GCP.

Some enterprise customers own and run their own "private cloud tenant" -- essentially, they rent a private set of cloud computing resources from one of the large cloud providers (AWS, Microsoft Azure, or GCP) and control it more tightly than the corresponding public cloud services. These customers typically prefer that services like Thunk.AI be deployed within that private cloud tenant, and Thunk.AI supports such private instance deployments across all three of the major cloud providers. If you are part of such an enterprise customer, when you sign in and use the Thunk.AI service, all work happening on your behalf (including any AI agents working on your behalf) is executed by software running inside the agent host service on your specific private cloud tenant.

AI Agents vs LLMs

It is important to distinguish between the AI agents and the large language model (LLM) that powers them. The LLM used in Thunk.AI could be GPT-4o, Claude Sonnet, Google Gemini, or eventually other LLMs. These models are typically hosted in their own cloud services; in a private cloud tenant environment, they could be hosted within that specific tenant.

An AI agent sets up the appropriate context and messages to send to the LLM. It then interprets the response of the LLM, takes some actions based on it, and repeats this iteratively until it decides that the work is done. The LLM can only respond by asking the AI agent to invoke one or more "AI tools". The tools are the way for the LLM to indicate that extra information should be fetched or that some action should be taken on its behalf.

The important takeaways are:

  • The LLM never accesses data directly. Any information needed by the LLM is provided by the AI agent, either as part of setting up the environment or by responding to tool call requests.

  • The LLM never updates data directly. Any such changes are made by the AI agent by responding to tool call requests.

AI tools are the mechanism by which AI agents access all data.
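To make this concrete, here is a minimal sketch of such an agent loop in Python. All of the names here (llm_client.complete, response.tool_calls, and so on) are invented for illustration -- this is not Thunk.AI's actual implementation, just the general pattern described above:

```python
def run_agent(task: str, tools: dict, llm_client) -> str:
    """Drive the LLM iteratively until it decides the work is done.

    `tools` maps tool names to plain Python callables. The LLM never
    touches data itself; it can only ask this loop to invoke a named tool.
    """
    messages = [{"role": "user", "content": task}]
    while True:
        response = llm_client.complete(messages, tools=list(tools))
        if not response.tool_calls:
            # No tool requests: the LLM considers the work complete.
            return response.content
        # The LLM requested tool invocations; the agent (not the LLM)
        # executes them and feeds the results back as new messages.
        for call in response.tool_calls:
            result = tools[call.name](**call.arguments)
            messages.append(
                {"role": "tool", "name": call.name, "content": str(result)}
            )
```

Note that all data access happens inside the tool callables, which run under the agent's control; the LLM only ever sees the results the agent chooses to pass back.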

Connections, AI Tools, and Tool Modules

AI tools are built into the system in functionality bundles called "Modules". There are four kinds of modules:

  1. System modules: every account has a default set of AI tool modules with capabilities like web search and browsing, email drafting, image and video processing, and document processing.

  2. Connection-specific modules: every user can augment their account with connections to other systems. For example, a user may add a Google Drive connection, which automatically enables an AI tool module that allows that user's AI agents to read and write files, spreadsheets, and folders in that Google Drive. A variety of other connection types are supported -- to SQL databases and to common SaaS applications. The connection is a central place to record user credentials that establish access to the external system, and the AI tool module associated with each connection is what makes that external system accessible to the AI agents.

  3. "This Thunk" module: within every thunk, there can be custom AI tools defined. These can utilize the available connections but implement very specialized functionality. Custom tool categories include API tools that can invoke any REST API using HTTPS, database tools that invoke any SQL commands with a database connection, and AI tools that combine natural language instructions with existing tools to build intelligent composite tools.

  4. User-defined modules: this is an extensibility mechanism that allows users to define tools in one thunk and share them across many other thunks (their own, or other users' as well). This is especially valuable when teams of users are implementing many AI agent automation processes.
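As a rough illustration of the custom API tools mentioned in item 3: conceptually, such a tool boils down to a named, described function that wraps a REST call, which the agent can then invoke on the LLM's behalf. Everything in this sketch (the endpoint, the function name, the billing API) is hypothetical:

```python
import requests  # third-party HTTP client (pip install requests)

def lookup_invoice(invoice_id: str, api_token: str) -> dict:
    """Fetch one invoice from a hypothetical internal billing API."""
    resp = requests.get(
        f"https://billing.example.com/api/v1/invoices/{invoice_id}",
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=30,
    )
    resp.raise_for_status()  # surface HTTP errors to the agent
    return resp.json()
```

In Thunk.AI, the credentials (api_token here) would come from a connection rather than being passed around by the LLM, which never sees them.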

When a thunk is being designed and configured, the choice of enabled modules (and of enabled tools within those modules) is an important dimension of control: it lets the thunk owner ensure that each step of the workflow, and each activity run by AI agents, follows the desired process.

Access control to AI tools

Every user in the Thunk.AI service has an identity based on their account/sign-in. Each thunk implements role-based access control -- there is a thunk owner, there are admins who can edit the AI logic and instructions of the thunk, there are participants who can be assigned specific workflow steps, and there are end-users who can only submit workflow requests.

Within a thunk, the owner, the admins, and the participants can "do work", and therefore each has an AI agent to assist them in their work. A user's AI agent has access to the AI tools and modules within the thunk. The platform runs the AI agent logic with the identity of the user.
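The role model described above can be summarized in a short sketch. The enum and helper names are invented for illustration; Thunk.AI enforces these rules server-side, and this merely mirrors the description:

```python
from enum import Enum

class Role(Enum):
    OWNER = "owner"
    ADMIN = "admin"
    PARTICIPANT = "participant"
    END_USER = "end_user"

# Roles whose users "do work" and therefore get an assisting AI agent.
AGENT_ROLES = {Role.OWNER, Role.ADMIN, Role.PARTICIPANT}

def can_edit_ai_logic(role: Role) -> bool:
    """Only the owner and admins may edit a thunk's AI logic and instructions."""
    return role in {Role.OWNER, Role.ADMIN}

def can_submit_request(role: Role) -> bool:
    """Every role, including end-users, may submit workflow requests."""
    return True
```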

When a thunk is exported as a module, the tools exported with it may be used in another thunk by any of its users, subject to the same access control rules. For example, if I am a participant in thunk A and have access to a specific AI tool for some work, and I then build another thunk B, I can import and use that AI tool without needing to know exactly how it works or even having access to the underlying data. All that matters is that the owner or admins of thunk A chose to export it as a shareable module.

Accessing enterprise data from the public cloud instance

If your Thunk.AI instance is a private deployment in your corporate cloud tenant, then the Thunk.AI agent host service has network access to your corporate applications and databases. The standard mechanism of tools and modules works well.

On the other hand, if your Thunk.AI account is part of the public instance of Thunk.AI (https://app.thunk.ai), you might still want your AI agents to be able to access enterprise applications and databases that are protected behind a corporate firewall. In order to do so, the AI agents need to be able to communicate with those applications (via an API) or databases (via a SQL protocol). While there are complex and custom mechanisms implemented in some environments to enable this (various kinds of "proxy" servers), the most common mechanism is to whitelist the IP addresses of the Thunk.AI agent host services. That whitelist is provided below as of February 2025. Please note that this list might occasionally change as we scale our service or modify our infrastructure.

35.239.191.23
34.16.85.96
34.136.161.249
34.67.244.155
34.44.127.89
34.66.228.101
34.60.199.231
34.30.40.194
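If you operate a proxy or gateway in front of an internal API, one way to apply this list is to check each inbound request's source address against it. A minimal sketch in Python, using the addresses above; since the list may change, in practice you would treat it as configuration rather than hard-coding it:

```python
from ipaddress import ip_address

# Copied from this article (February 2025); subject to change.
THUNK_AI_HOST_IPS = {
    "35.239.191.23", "34.16.85.96", "34.136.161.249", "34.67.244.155",
    "34.44.127.89", "34.66.228.101", "34.60.199.231", "34.30.40.194",
}

def is_thunk_ai_source(source_ip: str) -> bool:
    """True if an inbound request originates from a Thunk.AI host address."""
    return str(ip_address(source_ip)) in THUNK_AI_HOST_IPS
```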

Is it safe to trust the Thunk.AI service?

We articulate our technical approach to safety and security at length in this article. As expected of any trustworthy enterprise software provider, we also have well-documented corporate and engineering policies, and perform regular compliance audits. We are happy to share compliance audit reports on request.
