Terminology
Welcome to our AI Terminology guide. Here you’ll find clear, concise explanations of key AI concepts and terms. Click on any entry to learn more.
A2A Agent to Agent
Agent to Agent (A2A) allows AI agents to work together, even if they're built by different companies.
Picture your business using several AI assistants—one for customer queries, another for inventory, and another for HR. Instead of working separately, they could communicate and collaborate together using the A2A protocol.
Agent to Agent (A2A) is a protocol proposed by Google and supported by other software and technology providers.
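As an illustration only, two agents exchanging a task might look like the sketch below. The message field names here are invented for clarity and are not the actual A2A specification.

```python
import json

def make_task_request(sender, recipient, task):
    """Build a simple task-request message between two agents."""
    return json.dumps({
        "type": "task_request",
        "from": sender,
        "to": recipient,
        "task": task,
    })

def handle_message(raw_message):
    """A receiving agent parses the request and returns a reply."""
    msg = json.loads(raw_message)
    if msg["type"] == "task_request":
        return json.dumps({
            "type": "task_result",
            "from": msg["to"],
            "to": msg["from"],
            "result": f"completed: {msg['task']}",
        })
    return None

# The HR agent asks the inventory agent to do something on its behalf.
request = make_task_request("hr-agent", "inventory-agent", "check stock levels")
reply = handle_message(request)
```

In the real protocol, discovery, authentication, and streaming updates are all part of the exchange; the point here is simply that agents interoperate by speaking a shared message format.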
AI Agents
AI Agents are sophisticated AI systems designed to operate with greater autonomy, making decisions and taking actions to accomplish goals with minimal human supervision. Unlike traditional AI systems that respond to specific inputs, Agent AI can independently plan, reason, and execute complex tasks.
Credit Assignment Problem
What it means - Figuring out which specific actions were responsible for a good or bad outcome.
Think of it like - If you win a football game, was it because of the first goal, the last-minute save, or a strategy from earlier?
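One common way reinforcement learning tackles this is to spread credit backwards using a discount factor. The sketch below is illustrative: only the final step earns a reward (winning the game), yet every earlier action receives some discounted credit.

```python
def discounted_returns(rewards, gamma=0.9):
    """Assign credit backwards: each step's return includes
    discounted future rewards, so earlier actions share credit
    for later outcomes."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Only the last step is rewarded, but every step gets partial credit.
credit = discounted_returns([0, 0, 0, 1], gamma=0.5)
# credit == [0.125, 0.25, 0.5, 1.0]
```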
Experience-based Learning (Era of Experience)
What it means - AI learns by doing, trying things out, and learning from outcomes, instead of only copying humans.
Think of it like - Learning to cook by experimenting in the kitchen, rather than just following a recipe book.
Frontier Model
Frontier models, sometimes referred to as frontier AI, are highly advanced AI systems with significant capabilities. Just as each new Apple or Android phone pushes the boundaries of what smartphones can do, frontier models represent the cutting edge of AI technology.
Characteristics & Considerations
- Highly capable general-purpose AI models that can perform a wide variety of tasks, including multimodal processing of text, images, audio, and video.
- Companies like Anthropic (Claude), Google (Gemini), Microsoft (Copilot), and OpenAI (ChatGPT) now refer to their most advanced AI systems as “Frontier Models.”
- Frontier models are powerful, capable of zero-shot learning, and highly scalable, but they can be costly.
- Typically more powerful than locally run or open-source models, since they are cutting-edge technologies developed by major AI companies.
- Since large technology companies operate these models, it is essential to consider data security and compliance. Not only should the use of AI adhere to data protection regulations (GDPR, CCPA, DPA, etc.), but additional steps must be taken by businesses themselves. This could include masking or anonymising sensitive information before sending it to AI systems.
Summary
The key thing to remember is that frontier models aren’t just slightly better versions of older AI - they represent significant leaps forward in capability, kind of like the difference between a basic calculator and a modern smartphone. They’re tools that, when used thoughtfully, can help businesses operate more efficiently and innovatively.
Grounding
What it means - The idea that AI should understand the real-world consequences of its actions.
Think of it like - Knowing that a cake recipe is good not just because it sounds good, but because someone actually baked it and it tastes good.
Large Language Models (LLMs)
Large Language Models (LLMs) are a class of artificial intelligence models designed to understand, generate, and interact with human language. They function like a multilingual expert who has not only read millions of books but can also quickly recall and apply that knowledge. They have a deep understanding of grammar, style, and context—so you can rely on them to draft, edit, or summarise text in almost any subject.
They have been trained on extensive text data, which enables them to understand context, generate relevant responses, and adapt their communication style to fit different situations.
Model Context Protocol (MCP)
Model Context Protocol (MCP) is a standardised communication framework, introduced by Anthropic, for connecting AI language models to external tools and data sources. The protocol plays a crucial role in establishing reliable and secure connections between applications and AI models.
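MCP messages are built on JSON-RPC 2.0. The sketch below constructs a request asking a server which tools it exposes; the helper function is illustrative, though `tools/list` is a method name from the published specification.

```python
import json

def mcp_request(request_id, method, params=None):
    """Build a JSON-RPC 2.0 request, the envelope MCP messages use."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Ask an MCP server to list the tools it makes available to the model.
list_tools = mcp_request(1, "tools/list")
```

A real client would send this over a transport such as stdio or HTTP and match the `id` field in the server's response; that plumbing is omitted here.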
One-Shot Learning
One-shot learning refers to an artificial intelligence system’s ability to correctly identify or handle a task after being shown just one example during inference or training.
The term “one-shot” highlights that the AI only needs a single instance to learn how to perform a specific task. This is similar to how humans often learn a new skill or concept after seeing just one demonstration.
- Allows AI models to generalise from a minimal amount of data.
- It differs from zero-shot learning, which uses no examples.
- Typical applications include image recognition, natural language classification, and personalisation.
- It is beneficial in domains where acquiring large datasets is impractical.
One-shot learning is essential to human-like learning efficiency, enabling AI systems to perform well with limited input.
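With language models, one-shot learning often takes the form of a prompt containing a single worked example before the real input. The helper below is a sketch of that prompt structure; the model call itself is omitted.

```python
def one_shot_prompt(example_input, example_output, query):
    """Build a prompt with exactly one worked example (the 'one shot')."""
    return (
        "Classify the sentiment of each review as positive or negative.\n"
        f"Review: {example_input}\nSentiment: {example_output}\n"
        f"Review: {query}\nSentiment:"
    )

prompt = one_shot_prompt(
    "The food was wonderful.", "positive",
    "Service was slow and the soup was cold.",
)
```

The single example shows the model the expected format and labels; removing it would turn this into a zero-shot prompt.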
Paperclip Problem
What it means - A thought experiment where an AI is told to make as many paperclips as possible — and ends up destroying the world to do it.
Think of it like - Following a rule so strictly that you forget common sense or the bigger picture.
Reinforcement Learning (RL)
What it means - A way of training AI by giving it rewards or penalties depending on how well it does something.
Think of it like - Training a dog — you give treats for good behaviour and none for bad. Over time, it learns what works.
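A minimal sketch of the idea, using a toy two-lever "bandit" problem with made-up payouts: the agent tries levers, collects rewards (the "treats"), and gradually learns which lever works.

```python
import random

def train_bandit(payouts, episodes=2000, lr=0.1, epsilon=0.1, seed=0):
    """Learn a value estimate for each lever by trial and error."""
    rng = random.Random(seed)
    q = [0.0] * len(payouts)  # estimated reward per lever
    for _ in range(episodes):
        # Explore occasionally, otherwise pick the best-known lever.
        if rng.random() < epsilon:
            action = rng.randrange(len(payouts))
        else:
            action = max(range(len(payouts)), key=lambda a: q[a])
        reward = payouts[action]  # the treat (or lack of one)
        q[action] += lr * (reward - q[action])  # move estimate toward reward
    return q

# Lever 1 always pays out, lever 0 never does; the agent figures this out.
q_values = train_bandit([0.0, 1.0])
```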
Reinforcement Learning from Human Feedback (RLHF)
What it means - A training method where people tell the AI which answers are better, and the AI learns from that.
Think of it like - A teacher grading essays and telling you how to improve your writing next time.
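A toy sketch of the preference-learning step: humans compare pairs of answers, and each answer's score is nudged toward the one they preferred. The update rule here is a simplification for illustration, not the actual RLHF training method.

```python
def learn_scores(comparisons, num_answers, lr=0.5, rounds=20):
    """comparisons: list of (winner, loser) answer indices
    from human judgements."""
    scores = [0.0] * num_answers
    for _ in range(rounds):
        for winner, loser in comparisons:
            gap = scores[winner] - scores[loser]
            # Nudge scores until the preferred answer clearly ranks higher.
            if gap < 1.0:
                scores[winner] += lr * (1.0 - gap) / 2
                scores[loser] -= lr * (1.0 - gap) / 2
    return scores

# A human preferred answer 0 over answer 1; the scores learn that ranking.
scores = learn_scores([(0, 1)], num_answers=2)
```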
Small Language Models (SLM)
Small language models (SLMs) are designed to perform specific tasks while using fewer resources than large language models (LLMs).
SLMs are constructed with fewer parameters and simpler architectures than large language models (LLMs). This design allows for faster training, less energy consumption, and the ability to be deployed on smaller hardware, such as mobile devices.
However, SLMs have a reduced capacity to handle complex language and exhibit lower accuracy, particularly when used outside of their intended focus.
Synthetic Data
What it means - Fake or made-up data created by AI to train other AI.
Think of it like - Practising conversations by talking to yourself in the mirror — not real, but still useful.
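A tiny sketch of what synthetic data can look like in practice: templated, made-up customer queries that could be used to train or test another system. All order numbers and products below are invented.

```python
import random

def generate_queries(n, seed=0):
    """Generate made-up customer-support queries from templates."""
    rng = random.Random(seed)
    templates = [
        "Where is my order {order}?",
        "Can I return the {product} I bought?",
        "Is the {product} back in stock?",
    ]
    products = ["kettle", "headphones", "desk lamp"]
    data = []
    for _ in range(n):
        template = rng.choice(templates)
        data.append(template.format(
            order=rng.randint(1000, 9999),
            product=rng.choice(products),
        ))
    return data

samples = generate_queries(3)
```

Real synthetic-data pipelines often use one AI model to generate the examples rather than fixed templates, but the principle is the same: manufactured data standing in for real data.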
The Bitter Lesson
What it means - A truth in AI that the best results often come when we let machines figure things out themselves, not when we try to help too much.
Think of it like - Realising your child does better when they learn by doing rather than you always telling them how.
Zero-Shot Learning
Zero-shot learning refers to an artificial intelligence system’s ability to correctly identify or handle tasks it has never encountered during its training phase.
The term “zero-shot” emphasises that the AI requires zero examples or prior experience with a specific task to perform it. This is similar to how humans can understand new concepts based purely on descriptions without needing direct experience.
- Zero-shot learning enables AI models to generalise knowledge to completely new situations
- It differs from few-shot learning, which requires a small number of examples
- Common applications include classification, translation, and task completion
- This capability is particularly valuable in situations where training data is scarce
- Modern frontier models like GPT-4 and Claude demonstrate strong zero-shot abilities
Zero-shot learning represents a significant advance in AI capabilities, moving closer to human-like flexibility in understanding and applying knowledge to new situations.
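With language models, zero-shot use often amounts to a prompt containing only an instruction and the input, with no worked examples. The helper below is a sketch of that prompt structure; the model call itself is omitted.

```python
def zero_shot_prompt(labels, text):
    """Build a prompt with an instruction and input but zero examples."""
    return (
        f"Classify the following text as one of: {', '.join(labels)}.\n"
        f"Text: {text}\nLabel:"
    )

prompt = zero_shot_prompt(
    ["positive", "negative"],
    "The battery died within an hour.",
)
```

Contrast this with the one-shot prompt above: here the model must infer the task format entirely from the instruction.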