What is an AI-native application platform?

Frames the evolution of AI-native applications in the context of enterprise service trends and AI technology trends.

Written by Praveen Seshadri

AI-native enterprise applications lie at the confluence of two major technology trends: the evolution of generative AI applications and the evolution of business services.

We are in the midst of an intense and rapid evolution of generative-AI technology from purely consumer applications towards business applications. The launch of ChatGPT by OpenAI kicked off the first wave of AI apps (chatbots and copilots), focused on consumers and dominated by a chat UI. The second wave that followed a year later (again sparked by OpenAI's "GPTs" platform) started to provide the ability for people to build and share simple AI apps for others.

The third wave of generative-AI applications is "agentic": these applications add automation and a number of other foundational capabilities essential for applying AI to robust enterprise scenarios.

We are also in the midst of a decades-long evolution of business services away from human-powered implementations to software implementations. Many enterprise services have traditionally been human-intensive, requiring human intelligence to deal with dynamic situations, variable data formats, and unpredictable inputs. As a consequence, these services have lacked the operational efficiency and scalability of purely software SaaS applications. Yet, while software-powered services have been able to take over some of these processes, they have not yet been able to replace the majority of human-powered services. Over the last year, however, generative-AI technology has created the potential for a new class of business application, dubbed "Service as Software".

The new concept of a generative-AI-powered "Service as Software" application implements a business service using AI-enhanced software, thereby achieving the best of both worlds --- intelligent execution as well as operational efficiency.

Whether referred to as "agentic" AI applications or "service as software" enterprise applications, these AI-native applications represent a new era of software technology with generative-AI models at their core.


1: Historical context: the evolution of enterprise services

Traditionally, there have been two kinds of enterprise services --- human-powered and software-powered. Both take requests from users and respond to them. In both cases, there is a person or team that is the "creator/designer" of the service, but they differ entirely in how user requests are processed.

The "human-powered" service

These services have existed as long as businesses have existed. As end-user consumers, we are all familiar with examples of these services --- engaging with a customer service representative over the phone, with a checkout clerk at a store, with a receptionist at a clinic, with a real estate agent, with a librarian when checking out books. Within a business, there are often hundreds of similar internal services and processes.

In all these human-powered services, the service creator defines a process and instructions for human service agents to follow. Users make requests by interacting with these service agents. The interface for end-user communication is often digital and facilitated by software (email, messaging, telephony, or web applications). The service agents may use internal software tools to record and facilitate their work. However, the actual service (understanding the request, making decisions, and acting upon them to respond) is performed by the human service agents.

There are business advantages to this approach:

  1. Compliance: the human agents can be trained on the process, company policy, and regulation. They can be evaluated for performance and incentivized to achieve the desired outcomes.

  2. Intelligence: the human agents are intelligent. They interpret the service creator's instructions and user requests flexibly to achieve the desired goals (eg: satisfy customers, show empathy) while still conforming to the process. They understand the difference between instructions that must be followed and instructions that are only guidelines.

  3. Modularity/Extensibility: When necessary, expert human agents can be consulted for specific requests, and some user requests can be sent off to a different specialized service.

  4. Ease of expression: the service creator can easily describe instructions in natural language, and not every detail needs to be described.

  5. Ease of user interaction: end users can easily express what they need and receive responses in natural language.

Of course, the challenges with this approach are:

  1. Cost/Scale: it scales poorly since human agents are expensive and do not work 24x7; they are also costly to train and difficult to retain.

  2. Reliability: it is difficult to achieve consistent quality of service as the instructions are open to interpretation and each human agent interpreting the instructions does so differently.

Over the last two decades, these challenges have been mitigated using a few broad approaches:

  • Lowering human agent cost via outsourcing

  • Improving productivity via standardized software tools for the human service agents. This has led to a variety of "vertical" SaaS products for different categories of human-powered services (eg: Salesforce for sales agents, Intercom for customer support agents).

  • Improving human agent reliability via standardized software templates and metrics incorporated in the vertical SaaS products.

None of these mitigations fundamentally addresses the core problem: human intelligence is needed for these services, yet human-powered services are expensive and do not scale well.

The "software-powered" service

Of course, as end-users in the modern world, we are all familiar with pure software services. We no longer book airline tickets via a human-powered service --- instead we book tickets via a software-powered service. Likewise, we order food, buy household items, call for a taxi, and do most of our banking and investment via software exclusively.

In a pure software service, the application logic is encoded into a deterministic software program. User requests are handled entirely by software rather than by human agents.

Not surprisingly, this has the opposite characteristics of human-powered services. Its advantages are automation, scale, and reliability. For the creator, its disadvantages are the complexity of creation (writing code is slow and expensive) and a lack of intelligence and flexibility in responding to requests: all logic is deterministic, so it becomes a huge problem to define and code for every edge case. For the end-users, the disadvantage is that they are expected to understand and adjust to the behavior of the software service. Finally, for the business, this approach cannot handle high-value services where a human should be involved for reasons of judgment, creativity, or compliance.

Again over the last two decades, these challenges have been tackled with some broad approaches:

  • Decrease the cost of creation: low-code and no-code application platforms have emerged, seeking to identify and fast-path the most common application patterns and reduce the complexity of application creation. An ecosystem of modular packages (eg: npm) and associated tooling has also emerged, making it possible to share and reuse code as building blocks for application development.

  • Improve ease of use: chatbots have been incorporated into the user application, providing the appearance of conversational interaction built on top of a basic request-response mechanism.

  • Scenario-specific intelligence: custom AI models have started to be used within software services to implement specific features (eg: to suggest an article or an ad in a dynamic context). More recently, some software services have added features that internally use the generative-AI capabilities of a large language model. In other words, they treat LLMs as just another internal mechanism to implement a custom AI model (see the sketch after this list). Search engines like Perplexity, Glean, and Google that provide LLM-summarized answers to questions are good examples of this.

  • Robotic process automation (RPA) took the first steps in the direction of "Service as Software". Using RPA, businesses were able to automate some basic categories of human-powered services that involved extremely rote data entry. One can think of RPA as a narrow, non-AI method of instructing software to perform a very specific class of human tasks.
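
To make the "LLM as an internal mechanism" pattern concrete, here is a minimal Python sketch. The names (`Ticket`, `call_llm`, `handle_ticket`) are illustrative placeholders rather than any particular product's API: the control flow of the service stays deterministic, and the LLM is invoked only for one narrow feature.

```python
# Sketch: a deterministic software service that uses an LLM for one narrow feature.
# `call_llm` is a hypothetical helper standing in for any chat-completion client.

from dataclasses import dataclass

@dataclass
class Ticket:
    customer_id: str
    body: str
    priority: str  # "low" | "normal" | "high"

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM client call (e.g. a chat-completions request)."""
    raise NotImplementedError

def handle_ticket(ticket: Ticket) -> dict:
    # Deterministic routing logic: every branch is written out in code.
    if ticket.priority == "high":
        queue = "escalations"
    else:
        queue = "standard"

    # The LLM is used only as an internal feature: summarize the ticket body
    # so the human agent who picks it up can read a short digest.
    summary = call_llm(f"Summarize this support ticket in two sentences:\n{ticket.body}")

    # The rest of the service remains ordinary, predictable software.
    return {"customer_id": ticket.customer_id, "queue": queue, "summary": summary}
```

The control flow never depends on the model's judgment, which is precisely what separates this pattern from the agentic applications described below.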

The "AI-powered" service as software
And now, the use of generative-AI utilized in automated Ai agents enables a hybrid combination of the best of both approaches. It provides the flexibility and intelligence of the human-powered services, along with the scale and efficiency of the software-powered services.

Let's take a look at how the evolution of AI apps has also converged onto the same capabilities.


2: Historical context: the evolution of AI apps

In the two years since the initial launch of ChatGPT, there have been two waves of AI-native applications. There are two distinct ways in which such an application differs from the legacy software service model: natural language user input and natural language application design.

Natural language user input: The user interface of the service is conversational and based on natural language. It is therefore intelligent and flexible, adapting to variability in inputs and environment while still achieving the goals of the user.

The first version of ChatGPT and other "intelligent chatbots" belong to this earliest wave. The application is meant to support a single user who is both the source of instruction (the "prompt engineer") and, when needed, the human-in-the-loop who vets and corrects the work of the LLM. The user experience is a direct conversation with the LLM model. Many vertical SaaS products and productivity suites also added this class of chat-based "copilot" as a means to provide their end-users with interactive AI engagement.

This kind of simple single-user interactive application model is inadequate for most enterprise applications. There is no designer providing instructions or shaping the application, and there are no human service agents in-the-loop to engage with the AI model and approve/refine its results.

Natural language application design: The application is created by a designer who describes the application logic via natural language instructions. This is effectively "programming" by describing what the application should do without having to represent that logic in complex software code (an expensive, time-consuming, and specialized process).

The second wave of "assistants", led by OpenAI's "GPTs", provided the beginnings of an application architecture. A typical assistant has a designer or owner who creates it in a design phase (by providing some prompts, some documents for context, and some custom tools to integrate with other systems). The end-user of the assistant gives it requests and the assistant responds in a conversational thread.
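
To make this architecture concrete, here is a minimal Python sketch of what a second-wave assistant amounts to. The names (`Assistant`, `call_llm`, `on_user_message`) are illustrative placeholders, not any particular vendor's API: the designer supplies instructions, documents, and tools up front, and the end-user then drives every turn of the conversation.

```python
# Sketch of a second-wave "assistant": designer-time configuration plus a
# user-driven conversational loop. All names are hypothetical placeholders.

from dataclasses import dataclass, field

@dataclass
class Assistant:
    instructions: str                                   # the designer's natural-language prompt
    documents: list[str] = field(default_factory=list)  # context files for retrieval
    tools: dict = field(default_factory=dict)           # custom integrations, by name

def call_llm(assistant: Assistant, history: list[dict], user_message: str) -> str:
    """Placeholder for a chat-completion call that sees the instructions,
    retrieved document snippets, available tools, and the conversation so far."""
    raise NotImplementedError

# Design phase: the owner configures the assistant once.
support_assistant = Assistant(
    instructions="Answer billing questions politely; escalate refunds over $100.",
    documents=["billing_policy.md", "refund_faq.md"],
    tools={"lookup_invoice": lambda invoice_id: ...},
)

# Use phase: nothing happens unless the end-user sends a message and reads the reply.
history: list[dict] = []

def on_user_message(text: str) -> str:
    reply = call_llm(support_assistant, history, text)
    history.append({"user": text, "assistant": reply})
    return reply
```

The limitations discussed next fall directly out of this shape: nothing happens outside `on_user_message`, and there is no place for a second human to review the model's output.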

These second-wave AI applications still have a simple conversational user model, but the application itself has a no-code "programming model". The expressive power of the applications is limited, and there are few mechanisms for steering and controlling the AI's behavior. There is no notion of human service agents in-the-loop to vet the behavior of the AI model. There is no capacity for automated work: if the end-user doesn't interactively drive work and check the outcomes, no work happens.


3: AI-native applications: the convergence of AI and enterprise needs

The third (current) wave of AI applications is now focused on automating work using AI "agents". These agents do work on behalf of the user and/or on behalf of the service agents. The work may be long-running. The work may be triggered by events in the environment (incoming messages or data changes).

The primary motivation for automated applications is productivity through automation. AI agent automation is fundamentally different from traditional automation or workflow applications. In an AI-native application, the logic of the automated process and the individual actions are defined by the app owner in natural language. An AI agent then translates this high-level intent into suitable granular elements that define the application workflow (plan, data, documents, actions, events, etc.). Each AI-native application platform defines a particular application model with appropriate elements that provide a certain level of expressive power for the application.

After that, it is the "agentic" runtime that runs automatically. Human agents may be "in-the-loop" to approve or augment the AI-driven work, when needed.
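
As a rough illustration of this flow, here is a minimal Python sketch of an agentic application. Every name (`plan_from_intent`, `execute_step`, `ask_human_to_review`, `on_trigger`) is invented for illustration and is not the actual Thunk.AI application model: the owner's natural-language intent is decomposed into granular steps, the runtime works through them automatically when an event arrives, and a human agent is consulted only for steps flagged for approval.

```python
# Sketch of a third-wave, agentic application: natural-language intent is
# translated into a plan, and an automated runtime executes it with optional
# human-in-the-loop approval. All names are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Step:
    description: str      # granular action derived from the owner's intent
    needs_approval: bool  # whether a human agent must approve the result

def plan_from_intent(intent: str) -> list[Step]:
    """Placeholder: an LLM call that decomposes the owner's natural-language
    intent into granular steps (actions, data lookups, documents to produce)."""
    raise NotImplementedError

def execute_step(step: Step) -> str:
    """Placeholder: the AI agent performs one step (call a tool, draft a reply, ...)."""
    raise NotImplementedError

def ask_human_to_review(step: Step, result: str) -> str:
    """Placeholder: route the result to a human service agent for approval or edits."""
    raise NotImplementedError

def on_trigger(event: dict, owner_intent: str) -> list[str]:
    """Runs automatically when an event arrives (incoming message, data change),
    rather than waiting for an end-user to drive each turn."""
    results = []
    for step in plan_from_intent(owner_intent):
        result = execute_step(step)
        if step.needs_approval:
            result = ask_human_to_review(step, result)  # human-in-the-loop
        results.append(result)
    return results
```

The contrast with the second-wave sketch above is the entry point: work starts from an event trigger and the owner's intent, not from an end-user chat turn.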



In an enterprise environment, this isn't enough, however. Any important enterprise AI application also has demanding new requirements specific to the use of AI. We describe these as the "CHARM" attributes: Compliance, Human-in-the-loop collaboration, Automation, Reliability, and Modularity.

An AI-native enterprise application uses the power of generative-AI to react and respond automatically and intelligently to dynamic and partially specified instructions and inputs (a document, a user message, a web page, an image, and many combinations of these). It combines the flexibility of human-powered services with the operational efficiency of software services.

This new era of applications requires a fundamentally "AI-native" application platform like Thunk.AI.
