AI at work: 101

An informal description of advances in AI and how they apply to work productivity

Written by Praveen Seshadri

There's much excitement about recent advances in AI. At Thunk.AI, we are applying these AI advances to build new software that automates routine work. This improves productivity (how fast things get done and how much gets done) and, more importantly, frees up time for people at work to do more meaningful, satisfying, and impactful things.

This article explains how new AI technology relates to work productivity.

The basic approach to using AI in software applications is easy to describe. The software developer trains an AI model and programs the application to use that model to make AI-powered decisions. Then the application exposes these capabilities in a way that is useful to the end-user.

"Old" AI

Here's how it _used_ to work until recently:

  1. You collect "training data" --- these are a number of observations of the real world that you want the AI to emulate. For example, if you wanted to use AI to decide if a message was rude, you might record what 100,000 human beings decide ("Rude" or "Polite") when presented with different messages.

  2. You use this "training data" as examples to train an "AI model" using complex mathematical techniques. The AI model captures and records the statistical patterns found in the training data.

  3. Then the AI model acts like a software function using those statistical patterns. When given a new message, it responds with "Rude" or "Polite".

Of course, a lot of science, engineering, and technical expertise goes into step #2, and steps #2 and #3 require a big investment in computing hardware. However, the biggest challenge has always been collecting large, representative data sets (step #1). That is why AI was typically used only by the largest companies, which have access to very large data sets representing human behavior.
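To make these three steps concrete, here is a minimal sketch of the "old" workflow. It assumes the scikit-learn Python library and uses a tiny invented set of labeled messages; a real system would need a far larger training set:

```python
# A toy version of the "old" workflow: collect labeled examples, train a
# statistical model on them, then use it to classify new messages.
# Assumes scikit-learn is installed; the training data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: "training data" -- recorded human judgments (a real set would be huge).
messages = [
    "Thanks so much for your help!",
    "This is the worst reply I have ever received.",
    "Could you please send the report when you get a chance?",
    "Stop wasting my time with this garbage.",
]
labels = ["Polite", "Rude", "Polite", "Rude"]

# Step 2: train an AI model that captures the statistical patterns in the data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Step 3: the trained model now acts like a software function on new input.
print(model.predict(["Please stop sending me this nonsense."]))  # e.g. ['Rude']
```

The point is the shape of the workflow: every new task needed its own data set and its own trained model.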

Generative AI + Foundation models

Recent advances in AI technology (which often get assigned the umbrella term "Gen-AI") make two fundamental changes to the old approach: they are generative and they use foundation models.

Generative models look at a sequence of inputs and generate (predict) a sequence of subsequent outputs. The most well-known application demonstrating this is ChatGPT. It takes a sequence of words (messages) and produces the next set of words (a response). Underneath, there is a generative AI model simply predicting the next words of its response in a sequence. Its human-like "intelligent" responses arise from two things: (a) the new mathematical approach used to train the model and then generate the responses (the "generative pre-trained transformer", or GPT, algorithm), and (b) the massive amount of data used to train the model: pretty much all the publicly available text in the world.

The term "GPT" represents this class of generative AI models.

Alongside generative text models, generative image models have also emerged. These enable apps like DALL-E that can create images.

Another crucial distinction is that these are foundation models. You no longer need a separate AI model to do customer support conversations for company A and a separate AI model to do a shopping assistant for company B. Instead, a single foundation model is sufficient, as long as you can give it the context about the specific scenario you want to steer it towards. This requires a reasonably short description using natural language. The impact of this change is immense, because it now makes AI accessible to a vastly broader audience for a wide variety of use cases. No data set needs to be collected, no training needs to happen. The barriers to entry have been drastically reduced.
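As a sketch of what this "steering" looks like in practice, here is the same foundation model handling both scenarios just by changing a short natural-language instruction. This assumes the OpenAI Python SDK and an API key; the prompts and scenarios are illustrative:

```python
# One foundation model, two very different jobs. The only difference is the
# short natural-language instruction (the "system" message) that steers it.
# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def ask(instruction: str, user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # any hosted foundation model would do
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# Company A: customer support conversations.
print(ask("You are a polite customer support agent for Company A's routers.",
          "My router keeps dropping the connection."))

# Company B: a shopping assistant.
print(ask("You are a friendly shopping assistant for Company B's outdoor-gear store.",
          "I need a tent for two people for a rainy weekend."))
```

Notice that no data set was collected and no model was trained: the only thing that changed between the two uses is a sentence of instructions.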

OpenAI is one well-known provider of this technology, with AI models like GPT-4. There are other competing models (Llama, Mistral, etc.). Finally, there are providers that host these models in the cloud, making it easy for others to use them.

What problems should I apply AI to?

Once you get past the fun of asking ChatGPT to write a limerick about Einstein or paint an astronaut riding a horse, you face a pragmatic question: how can this help with your work? There are two aspects to consider when choosing problems for AI to help with:

  1. There should be a foundation model available. For example, if you want an AI model to predict exactly when your specialized equipment will fail, that may not be possible with a generic foundation model. But the good news is that companies like OpenAI provide "multi-modal" models that understand human text, speech, images, and video. That covers much of the basics of standard "human" work interactions. These AI models are really good at reading text (files, documents, messages) and understanding the content of images.

  2. The impact on your work should be meaningful. You will need to invest effort in exploring this new technology and introducing it into your work processes. While platforms like Thunk.AI simplify this a lot, it is still sensible to focus on processes or projects that matter.

Here's a simple way to discover good potential use cases: does your work involve a repetitive, mundane computer task that an intern could do, yet you or your teammates spend many hours doing it manually? For example, data entry from paper forms, or checking submissions to see if the data is correct, or reading documents to see if they fulfill some criteria, or answering repeated routine questions. These are great candidates for AI to help with. You'll need a platform like Thunk.AI that delivers these new AI models with predictability (at work, you often need predictability more than creativity), automation, and ease of use.

Conversely, here are some problems to avoid: do not expect AI to solve problems that are too complex for experienced people on your team. For example, don't expect AI to figure out how to restructure your organization to be more efficient, or to find savings via a deep analysis of your finances. That's not where the foundation models will help you. At least, not today.


So keep it simple, apply AI to repetitive mundane tasks, and free yourself up to do higher-value, unique, and satisfying work.

How do I apply AI to a specific problem?

If you've used ChatGPT, you know there are three things you need to do:

  1. You have to give ChatGPT the information needed about your specific problem. This context is often called "the prompt".

  2. Then you engage in a back-and-forth conversation with ChatGPT to read its responses and refine your prompts.

  3. Finally, you take its results and copy them back into your work environment.

That works fine for a single one-off use or for things you have to get done very occasionally. But it doesn't help you do things automatically, repeatedly, predictably, and 24x7.
The term "AI agent" refers to a layer of automation on top of the basic AI models that does these three steps automatically for you. An AI agent takes the problem you want to solve and (a) creates the prompt for the AI model, (b) engages in the back-and-forth discussion with the AI model to steer it to the desired useful conclusion, (c) integrates the result back into your work environment. It is like an assistant that engages with and supervises the AI model to ensure that the right work gets done the right way.

You need an AI platform that makes it easy to set up and use AI agents for your work. That's where Thunk.AI fits in. Thunk.AI is a self-service platform that makes it easy for you to set up and run AI-powered projects, processes, and applications at work.

Thunk.AI makes it easy to apply AI to your work. 

You do not need to be an AI expert, a data scientist, a software engineer, or a "prompt engineer". In fact, you do not need to know much more than the concepts in this article.
