We recognize that integrating AI agents into your workflows introduces unique security challenges beyond those of traditional software systems. We have taken extra security measures specifically designed to address these AI-specific concerns, ensuring the safety and integrity of your data while harnessing the full potential of AI automation.
Platform Mitigations of Common Risks
Implement Transitive Authentication: The identity of the thunk owner is transitively used for any tool calls that invoke APIs.
Implement Least Privilege Access: The thunk owner/admins can limit the AI tools available to every agent task. By doing so, they control the surface area of available access to external data and resources.
Regularly Audit and Review Access Privileges: Thunk owners should conduct these reviews as part of the recommended development and lifecycle management process.
Platform Monitoring and Logging: The Thunk platform uses OpenTelemetry collectors and cloud-specific telemetry and monitoring platforms to record, report, and alert on authentication and authorization events. All tool calls are logged along with their parameters and their results.
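The mitigations above combine into one pattern: every tool call runs under the thunk owner's identity, is checked against an owner-configured allow-list, and is logged with its parameters and result. The sketch below is illustrative Python only, not our production code; names such as `ThunkOwner` and `call_tool` are hypothetical:

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("thunk.tools")

@dataclass
class ThunkOwner:
    # Hypothetical identity record: the owner's credentials are used
    # transitively for every tool call the agent makes.
    user_id: str
    allowed_tools: set = field(default_factory=set)

def call_tool(owner: ThunkOwner, tool_name: str, tool_fn, **params):
    """Invoke a tool as the owner, enforcing the owner-configured
    allow-list and logging both parameters and results."""
    if tool_name not in owner.allowed_tools:
        log.warning("denied tool=%s user=%s", tool_name, owner.user_id)
        raise PermissionError(f"{tool_name} is not enabled for this task")
    log.info("call tool=%s user=%s params=%r", tool_name, owner.user_id, params)
    result = tool_fn(**params)  # runs as the owner, never a service account
    log.info("result tool=%s user=%s result=%r", tool_name, owner.user_id, result)
    return result

# Usage: the owner has enabled only a search tool for this agent task.
owner = ThunkOwner(user_id="alice", allowed_tools={"search"})
print(call_tool(owner, "search", lambda query: f"results for {query}", query="q3 report"))
```

Any tool the owner has not explicitly enabled fails with a permission error before the call is ever made, which is the "least privilege" behavior described above.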
Here’s a detailed look at how we’re tackling some distinct challenges posed by AI integration.
Principle of Least Privilege: Mitigating AI Manipulation Risks
A key concern with AI agents is the potential for them to be misled or manipulated into performing unintended actions, similar to how a human might be manipulated. Our robust back-end architecture is specifically designed to address this risk:
Strict Credential Binding: Our back-end architecture prohibits running agent code with broad, unchecked access. Instead, every AI agent runs with exactly the same credentials as the human user who created it. This human user is always the owner of the thunk that defines the AI automation, so each AI agent can only access data and perform actions that its human owner is already authorized to access and perform.
Inherent Jailbreak Protection: This architecture inherently mitigates potential “jailbreak” attempts – a unique risk in AI systems where malicious actors try to coerce the AI into performing unauthorized actions. Even if an AI agent is somehow manipulated into attempting an unauthorized action, it will fail due to lack of necessary permissions.
No Privileged Mode: There is no way for an AI agent to enter a more privileged mode or escalate its permissions beyond those of the invoking user. This strict limitation is hardcoded into our system architecture.
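The essence of these three points is that an agent's authorization check is the owner's authorization check, with no separate "agent mode" to escalate into. A minimal illustrative sketch (hypothetical names; the ACL and `Agent` class are stand-ins, not our actual data model):

```python
# Hypothetical access-control list: what each human user may do.
USER_ACL = {
    "alice": {"read:crm", "write:crm"},
    "bob": {"read:crm"},
}

def user_can(user: str, permission: str) -> bool:
    return permission in USER_ACL.get(user, set())

class Agent:
    def __init__(self, owner: str):
        self.owner = owner  # bound at creation; there is no API to change it

    def perform(self, permission: str) -> str:
        # The agent's authority IS the owner's authority, nothing more.
        if not user_can(self.owner, permission):
            raise PermissionError(f"{self.owner} lacks {permission}")
        return f"performed {permission} as {self.owner}"

agent = Agent("bob")
print(agent.perform("read:crm"))  # allowed: bob can read
# agent.perform("write:crm") would raise: bob cannot write, so neither can his agent
```

Even a manipulated agent that attempts `write:crm` simply fails the same check its human owner would fail.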
By implementing these principles, we create a robust security barrier that prevents scenarios where a compromised or manipulated AI agent could access or modify sensitive information beyond its intended scope. This addresses a key vulnerability specific to AI-driven systems while ensuring that AI agents remain powerful tools within their authorized domains.
Multi-Layered Input and Output Sanitization: Guarding Against AI Manipulation
AI agents, especially those based on large language models, can be susceptible to prompt injection attacks or may generate unexpected outputs. Our multi-layered approach addresses these AI-specific vulnerabilities:
Input Sanitization: All inputs to the AI agents undergo thorough checking and sanitization. This process guards against prompt injection attacks – a unique threat in AI systems where carefully crafted inputs could manipulate the AI into performing unintended actions.
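As a minimal illustration of this kind of check (not our actual sanitizer, and a real deployment would use far richer detection), a filter might strip non-printable characters and flag common injection phrasings before text reaches the model:

```python
import re

# Illustrative patterns only; production systems use broader, evolving detection.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"you are now", re.I),
]

def sanitize_input(text: str) -> str:
    # Drop control characters while keeping normal whitespace.
    cleaned = "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
    for pat in INJECTION_PATTERNS:
        if pat.search(cleaned):
            raise ValueError("possible prompt injection detected")
    return cleaned

print(sanitize_input("Summarize the attached Q3 notes."))
```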
Tool Call Validation: When AI agents make calls to tools or external services, these calls are meticulously monitored and validated. This step is crucial in preventing scenarios where a compromised or malfunctioning AI agent could attempt to misuse tools or access unauthorized services.
Output Verification: We offer options to configure AI logic as a “judge” to evaluate the sanity and appropriateness of outputs generated by tool calls and by the primary AI agent. This additional layer of AI-driven scrutiny is essential in catching subtle anomalies or potentially harmful content that traditional rule-based systems might miss. It’s particularly effective against AI hallucinations or instances where the AI might generate plausible but incorrect or harmful information.
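An output-verification "judge" of the kind described above can be sketched as follows. This is illustrative only: `judge_llm` stands in for a real secondary LLM call and here uses a trivial heuristic, and the threshold value is an arbitrary example:

```python
def judge_llm(prompt: str) -> float:
    # Stand-in for a real judge model: return a confidence score in [0, 1].
    # Here we simply distrust outputs containing unverified absolute claims.
    return 0.2 if "guaranteed" in prompt.lower() else 0.9

def verify_output(agent_output: str, threshold: float = 0.5) -> str:
    """Release the primary agent's output only if the judge approves it."""
    score = judge_llm(f"Rate the safety and accuracy of: {agent_output}")
    if score < threshold:
        raise ValueError(f"output rejected by judge (score={score})")
    return agent_output

print(verify_output("The Q3 report has three open action items."))
```

The point of the pattern is that the primary agent's output never reaches the user or a downstream tool without a second, independent evaluation.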
This comprehensive approach ensures that both the inputs to and outputs from AI agents are secure, mitigating risks specific to AI-driven systems such as data poisoning, output manipulation, or the generation of misleading information.
Controlled Interaction with All Systems: Preventing Unauthorized AI Actions
AI agents, especially those designed for automation, have the potential to interact with various systems, which could lead to unintended consequences if not properly controlled. Our solution to this unique challenge is a rigorous “tool-only” approach that extends to all interactions, both within and outside the Thunk system:
Strict Tool-Based Interaction: AI agents cannot directly change anything within the Thunk system or affect the external world. All actions, whether internal to Thunk or external, must go through an AI Guardian layer that vets and gates every tool invocation and every tool result.
Comprehensive Validation: Each tool interaction undergoes thorough validation to ensure it aligns with expected behaviors and authorized actions. This validation applies to all operations, regardless of whether they affect Thunk’s internal state or external systems.
Complete Action Traceability: Every tool use is meticulously logged, creating a comprehensive audit trail of all AI agent activities. This ensures that every action taken by an AI agent is traceable and accountable.
This architecture acts as a critical safeguard against one of the most significant risks in AI automation: the potential for an AI system to take actions that are technically possible but organizationally undesirable or potentially harmful. By requiring all actions to pass through the AI Guardian layer, we create multiple layers of protection against unauthorized or unexpected AI behaviors, both within our system and in interactions with external environments.
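The guardian pattern described above can be sketched as a single choke point that validates each invocation against a declared schema, records it to an audit trail, and only then executes the tool. This is an illustrative Python sketch with hypothetical names (`guardian_invoke`, `TOOL_SCHEMAS`), not the actual AI Guardian implementation:

```python
import json

# Hypothetical registry: the tools an agent may call and the
# parameters each accepts.
TOOL_SCHEMAS = {
    "create_ticket": {"title", "priority"},
}

AUDIT_LOG = []  # in a real system this would be durable telemetry, not a list

def guardian_invoke(tool_name, params, tool_fn):
    """Single gate for all agent actions: validate, execute, record."""
    if tool_name not in TOOL_SCHEMAS:
        raise PermissionError(f"unknown tool: {tool_name}")
    if not set(params) <= TOOL_SCHEMAS[tool_name]:
        raise ValueError(f"unexpected parameters: {set(params) - TOOL_SCHEMAS[tool_name]}")
    result = tool_fn(**params)
    AUDIT_LOG.append({"tool": tool_name, "params": params, "result": result})
    return result

ticket = guardian_invoke("create_ticket",
                         {"title": "Renew cert", "priority": "high"},
                         lambda title, priority: {"id": 1, "title": title})
print(json.dumps(AUDIT_LOG))
```

Because the agent has no path around `guardian_invoke`, an unknown tool or an unexpected parameter fails before anything executes, and every action that does execute leaves an audit record.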
Customer-Controlled LLM Integration: Enhancing Data Sovereignty
We understand that data control and compliance are paramount concerns when integrating AI into your workflows. To address this, we offer options that put you in control of your data’s journey:
Option to Use Your Own LLM Provider API Keys: Customers have the flexibility to provide their own large language model (LLM) provider API keys. This option ensures that you have visibility and control over all data shared with the LLM provider.
Leverage Existing Agreements: If you have pre-existing agreements with LLM providers, using your own API key allows you to extend the same contractual safeguards and compliance measures to the data processed through our platform.
Enhanced Data Sovereignty: This approach significantly enhances your data sovereignty, allowing you to adhere to specific data handling requirements, regional regulations, or industry-specific compliance standards.
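The underlying pattern is simple: when a customer supplies a key, the platform uses it in place of a platform-owned key, so all LLM traffic is governed by the customer's own provider agreement. The sketch below is an illustrative pattern, not the actual Thunk.AI configuration API; the key values and environment variable name are hypothetical:

```python
import os
from typing import Optional, Tuple

def resolve_llm_key(customer_key: Optional[str]) -> Tuple[str, str]:
    """Return (api_key, owner), preferring the customer's own key so that
    the customer's provider agreement governs the data sent to the LLM."""
    if customer_key:
        return customer_key, "customer"
    # Hypothetical fallback to a platform-managed key.
    return os.environ.get("PLATFORM_LLM_KEY", "platform-default"), "platform"

key, owner = resolve_llm_key("sk-customer-example")  # placeholder key value
print(owner)  # which agreement governs the data sent to the provider
```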
Commitment to Transparency: Addressing AI-Specific Concerns
The rapidly evolving nature of AI technology and its associated risks necessitates a strong commitment to transparency:
We provide regular updates on our security measures and any enhancements we implement, keeping you informed about how we’re addressing emerging AI-specific security challenges.
We maintain open channels for customers to raise concerns or ask questions about our AI security approach. This is particularly important in the AI space, where new types of vulnerabilities or attack vectors may emerge rapidly.
By fostering this open dialogue, we ensure that our customers are always aware of how their data is being protected in the context of AI-driven automation, addressing the unique trust challenges posed by AI systems.
For more information about our AI-specific security practices or to discuss your particular security needs in the context of AI automation, please don’t hesitate to contact our security team at [email protected].
Deployment Security
Deploying Thunk.AI to private cloud locations allows organizations to maintain control over their data and infrastructure, which is particularly beneficial for industries with stringent data privacy and security regulations. Because Thunk.AI runs on any major cloud provider, it integrates seamlessly into existing cloud environments, providing a consistent and unified platform for AI-driven automation. See our deployment options for more information.
End-to-end security and encryption are integral to Thunk.AI's deployment strategy, safeguarding data both in transit and at rest. This comprehensive approach to security ensures that data integrity and confidentiality are maintained, meeting the highest standards of data protection.
