Enterprise requirements for an AI-native application platform

Describes the "CHARM" tenets of an enterprise platform for AI-native applications

Written by Praveen Seshadri

In an earlier article, we described AI-native enterprise applications: a new era of technology with generative-AI models at their core.

A platform for such AI-native enterprise applications has to meet an ambitious set of requirements and provide a broad set of capabilities.

Obviously, AI lies at the heart of an AI-native platform. Both the design-time and run-time platform environments must understand human intent expressed in natural language, process human-consumable content (documents, media, messages, web pages), and interact with legacy applications that were designed for humans.

The expressive power of the application (the "logic" of the application expressed in natural language) is also important. Enterprise AI-native applications are not just question-answer systems with a search box or chat UI. They typically involve complex interactions with information and actions from other business systems and applications. The application logic needs to be able to describe asynchronous, long-running workflow processes that integrate with existing enterprise applications and databases. Read more about the application platform concepts of Thunk.AI and their expressive power.
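As a rough, purely illustrative sketch (this is not Thunk.AI's actual syntax; every name and field below is hypothetical), such natural-language application logic might be declared as an asynchronous, multi-step workflow along these lines:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowStep:
    """One unit of AI logic, expressed as a natural-language instruction."""
    name: str
    instruction: str                                  # what the AI agent should accomplish
    tools: list[str] = field(default_factory=list)    # external systems it may call

# Hypothetical order-escalation workflow: long-running, asynchronous,
# and integrated with existing enterprise systems.
order_escalation = [
    WorkflowStep(
        name="classify_request",
        instruction="Read the incoming support email and classify its intent.",
    ),
    WorkflowStep(
        name="look_up_order",
        instruction="Find the customer's order in the order database.",
        tools=["orders_db"],
    ),
    WorkflowStep(
        name="draft_response",
        instruction="Draft a polite reply proposing a resolution, citing policy.",
        tools=["crm"],
    ),
]
```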

While it is essential for the platform to be intelligent and expressive, this is not sufficient for enterprise applications. There are five enterprise-specific "CHARM" tenets of an AI-native agentic application platform.

1: Compliance: the platform must comply with a wide variety of enterprise policies, with special consideration for AI guidelines and security concerns.

Obviously, every enterprise application platform must comply with standard enterprise IT requirements. There are additional considerations when it comes to AI platforms.

  • A primary security concern is to ensure that the underlying LLM does not learn from the data and user interactions of the application.

  • Since AI applications are data-heavy, compliance with data policies is very important. Many enterprises may impose deployment constraints that force the AI-native platform to deploy and run in a sandboxed fashion in the enterprise environment.

  • Since AI is making decisions and taking actions that might otherwise have been done by employees, the AI may need to have the same "employee training" with respect to rules and policies. Some of these policies may be explicit and some are implicit (staying polite in customer communication, conformance to legal requirements, etc.).

  • Access control for AI applications will need to distinguish between application designers, human service agents, and end-users. The programming model will need to clearly define the user identity with which each unit of AI logic runs. For example, if an end-user initiates an application workflow, does it run "as the end user", "as a human service agent", "as the application designer", or under some other identity? (See the sketch after this list.)

  • Finally, every enterprise will expect that the AI-native platform can prove these different forms of compliance in a periodic audit process.
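To make the access-control question concrete, here is a minimal sketch of how a platform might declare the identity under which each unit of AI logic runs; the identities and policy fields are assumptions for illustration, not Thunk.AI's actual model:

```python
from dataclasses import dataclass
from enum import Enum

class RunAsIdentity(Enum):
    """Possible identities under which a unit of AI logic can execute."""
    END_USER = "end_user"
    HUMAN_SERVICE_AGENT = "human_service_agent"
    APPLICATION_DESIGNER = "application_designer"
    SERVICE_ACCOUNT = "service_account"

@dataclass(frozen=True)
class StepAccessPolicy:
    step_name: str
    run_as: RunAsIdentity      # whose permissions apply when this step runs
    audit_log: bool = True     # record the decision trail for periodic audits

# Hypothetical policy: the workflow is triggered by an end-user, but the
# database lookup runs under a scoped service account rather than the
# designer's broader permissions.
policies = [
    StepAccessPolicy("classify_request", RunAsIdentity.END_USER),
    StepAccessPolicy("look_up_order", RunAsIdentity.SERVICE_ACCOUNT),
]
```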

2: Human-in-the-loop: collaboration between AI agents and human service agents and/or end-users.

In an enterprise system, it may be essential, for reasons of trust, regulatory compliance, or correctness, that certain AI actions and decisions be controlled and influenced by human-in-the-loop engagement. This can take many forms:

  • In some situations, the enterprise may require that a responsible human validate and approve any work done by an AI agent.

  • Some application work may need a workflow where some tasks are performed by AI agents and some by human agents.

  • In some situations, it may be the AI agent that is checking and verifying the work of the human service agent.

  • Across the board, it is necessary for all AI actions and decisions to be explainable and traceable by human service agents after the fact.

The AI-native application platform must provide the abstractions and mechanisms to ensure that these different human-and-AI-agent collaboration models can occur.
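As one illustration of such a collaboration model, the sketch below shows a hypothetical approval checkpoint that pauses AI work until a responsible human reviews it, keeping the AI's rationale for later traceability. The names and the `request_review` callback are invented for this example and are not a specific platform API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ApprovalCheckpoint:
    """Pauses a workflow until a responsible human approves the AI's output."""
    step_name: str
    reviewer_role: str              # e.g. "claims_supervisor"
    explain: Callable[[], str]      # AI-produced rationale, kept for traceability

def run_with_approval(ai_result: str, checkpoint: ApprovalCheckpoint,
                      request_review: Callable[[str, str], bool]) -> str:
    """Route the AI result to a human reviewer; proceed only on approval."""
    rationale = checkpoint.explain()
    approved = request_review(checkpoint.reviewer_role,
                              f"{ai_result}\n\nWhy: {rationale}")
    if not approved:
        raise RuntimeError(f"Step '{checkpoint.step_name}' rejected by reviewer")
    return ai_result
```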

3: Automation: automated AI agents result in operational efficiencies.

AI agents are valuable, of course, primarily because they can operate without a human user or human service agent driving and vetting the work. In other words, automation results in efficiency and productivity.

  • Work is driven not just by end-user inputs but also by change events in the enterprise environment. The platform needs to integrate with other business systems and trigger work when appropriate.

  • The platform needs a programming model that can register simple logic or complex multi-step workflow logic to be run automatically when events occur (see the sketch after this list).

  • The application platform needs to support fully-automated applications (the AI agent does all the work and only escalates to a human when required) as well as partially-automated applications (the AI agent does some of the work and the human completes or approves it).
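A minimal sketch of such an event-driven programming model, assuming a hypothetical event registry (none of these names come from Thunk.AI), might look like this:

```python
from collections import defaultdict
from typing import Callable

# Minimal event registry: change events in other business systems
# (a new CRM ticket, an updated database row) trigger registered AI logic.
_handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def on_event(event_type: str):
    """Register a workflow handler to run automatically when an event occurs."""
    def register(handler: Callable[[dict], None]):
        _handlers[event_type].append(handler)
        return handler
    return register

@on_event("crm.ticket_created")
def triage_new_ticket(payload: dict) -> None:
    # Fully or partially automated: the AI agent triages the ticket and
    # escalates to a human service agent only when it lacks confidence.
    print(f"Triaging ticket {payload.get('ticket_id')}")

def dispatch(event_type: str, payload: dict) -> None:
    for handler in _handlers[event_type]:
        handler(payload)

dispatch("crm.ticket_created", {"ticket_id": "T-1042"})
```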

4: Reliability: consistent and repeatable behavior is essential to establish trust in an AI application. Further, this reliability should be testable and provable to the application designer.

Generative AI models are probabilistic by design. They respond to inputs with some statistical variance. While this is a desirable feature in creative environments, it is not desirable in enterprise environments. This problem has to be explicitly addressed and compensated for in the design of an AI-native application platform. Consistency and reliability are essential requirements.

  • Does the AI do what it was expected to do? This is the concept of consistency. Some of these expectations are explicit (based on instructions provided) and some are implicit. An AI-native application platform must ensure consistency via mechanisms for granular steering and control by application designers, human agents, and end-users.

  • These need to be coupled with mechanisms to check, verify, and course-correct as needed.

  • Consistency controls are needed not only during execution. They are also needed in a quality-control/testing environment (along with versioning and reliable deployment capabilities) as well as in a post-facto audit environment.

  • Designers, human agents, and end-users need to be able to improve application consistency by interactively scoring and labeling positive and negative results.

  • In enterprise applications, the same inputs should repeatably produce the same results, and minor changes in inputs should not create wild differences in outputs. This is the concept of reliability (see the sketch after this list).
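One way to make consistency and repeatability testable is a simple regression harness that runs the same unit of AI logic several times and measures agreement with an expected result. The sketch below is illustrative only; `run_step`, the agreement threshold, and the exact-match comparison are assumptions, not a prescribed testing method:

```python
from collections import Counter
from typing import Callable

def consistency_check(run_step: Callable[[str], str], prompt: str,
                      expected: str, trials: int = 5,
                      min_agreement: float = 0.8) -> bool:
    """Run the same AI step repeatedly and verify it matches expectations.

    `run_step` stands in for a single unit of AI logic; in a real test
    harness it would call the deployed application version under test.
    """
    outputs = [run_step(prompt) for _ in range(trials)]
    matches = sum(1 for out in outputs if out.strip() == expected.strip())
    agreement = matches / trials
    dominant, count = Counter(outputs).most_common(1)[0]
    print(f"agreement={agreement:.0%}, dominant answer={dominant!r} ({count}/{trials})")
    return agreement >= min_agreement
```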

5: Modularity: composition and reuse of customizable logic and content are essential for scalable creation and maintenance of applications.

Complex applications are not monolithic. Multiple people have to work together to implement an application. Applications are built in stages and versions, using some prior capabilities and changing or adding others. A composition or modular programming model is needed. There are different levels at which modularity is valuable:

  • The underlying engine of intelligence may be a single multi-modal LLM or a combination of models, general purpose or fine-tuned. In different enterprise environments, there may be requirements to work only with specific approved models.

  • For legacy application platforms, integrating existing business applications is usually complex (API-based connectors require custom code) and fragile (prone to a number of failure modes). The AI-native platform needs to make this much easier, handling both the discovery of connection APIs and the data-mapping work required when invoking the APIs.

  • Each AI application cannot be expected to be written entirely from scratch. Yet the underlying LLMs by themselves do not provide any mechanism to compose modular logic, so this has to be provided by the application platform.

  • The presence of a modular programming model enables the creation of an ecosystem of shareable and reusable modules. Just as traditional software programs benefit from an ecosystem of code libraries and packages that are widely reused, the same concepts have to be supported for "AI logic programs" written in natural language.

  • As modules of AI logic are shared and reused, it is important that the application designer be able to customize and control how modular logic is used by their specific application (see the sketch after this list).
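As an illustration of modular, customizable AI logic, the sketch below imagines a reusable natural-language module that an application designer can specialize without editing the shared original; the `AILogicModule` structure and its fields are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AILogicModule:
    """A reusable unit of natural-language AI logic, shareable across apps."""
    name: str
    instructions: str                                   # the natural-language "program"
    parameters: dict[str, str] = field(default_factory=dict)

    def customize(self, **overrides: str) -> "AILogicModule":
        """Return a copy specialized for one application, leaving the shared module untouched."""
        merged = {**self.parameters, **overrides}
        return AILogicModule(self.name, self.instructions, merged)

# A shared module from a hypothetical module library, customized per application.
summarizer = AILogicModule(
    name="document_summarizer",
    instructions="Summarize the document, preserving figures and legal caveats.",
    parameters={"tone": "neutral", "max_words": "200"},
)
claims_summarizer = summarizer.customize(tone="formal", max_words="120")
```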

These "CHARM" tenets provide an essential framework to evaluate any AI-native application platforms for enterprise adoption.
