
Model Context Protocol: What Happens When AI Starts Talking To AI

It’s early days, but there’s a shift happening in how AI systems exchange information, and it could be as transformative as the API economy was for software.

Right now, most large language models (LLMs) work in isolation. They generate answers based on prompts, but lack persistent memory or shared understanding with other systems. If you want different models or agents to work together, you typically need to stitch them together with custom logic, manual context passing and a lot of prompt engineering.
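To make that concrete, here’s a rough sketch of what that stitching tends to look like in practice. The function names and prompt format below are invented for illustration, not taken from any particular framework; the point is that context only travels as raw text pasted into the next prompt.

```python
# Hypothetical example of today's manual glue between two models.
# Both model calls are stubbed; the names are illustrative only.

def ask_model_a(question: str) -> str:
    """Call the first model (stubbed for illustration)."""
    return f"Draft answer to: {question}"


def ask_model_b(question: str, upstream_answer: str) -> str:
    """Call the second model, pasting the first model's output into the prompt."""
    prompt = (
        "Another assistant produced this draft:\n"
        f"{upstream_answer}\n\n"
        f"Refine it to answer: {question}"
    )
    # All 'context' travels as raw text inside the prompt; nothing about intent,
    # constraints or prior decisions survives in a structured, reusable form.
    return f"Refined answer (prompt was {len(prompt)} characters)"


question = "Summarise our Q3 roadmap"
print(ask_model_b(question, ask_model_a(question)))
```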

That’s where the Model Context Protocol (MCP) comes in. Still in its early stages, MCP is starting to define a common structure for how models can pass memory, metadata and context to each other - directly, without a human middle layer. Think of it like an API for model-to-model communication: instead of endpoints and payloads, it’s about shared understanding and stateful collaboration between AIs.
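As a rough illustration - not the actual MCP wire format, which is still evolving - you could imagine the context one model hands to another as a small structured envelope rather than a wall of prompt text. The field names below are our own assumptions, not part of any spec.

```python
# A minimal sketch of the idea, not the MCP specification itself.
# All field names are illustrative assumptions.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ContextEnvelope:
    task_id: str                                           # which piece of work this context belongs to
    intent: str                                            # what the originating model was trying to achieve
    memory: list[str] = field(default_factory=list)        # facts or decisions worth carrying forward
    constraints: list[str] = field(default_factory=list)   # limits the next model should respect
    metadata: dict = field(default_factory=dict)           # provenance: producing model, schema version, etc.


envelope = ContextEnvelope(
    task_id="roadmap-summary-001",
    intent="Produce a one-page summary of the Q3 roadmap for the exec team",
    memory=["Q3 priorities: platform reliability, onboarding flow"],
    constraints=["No confidential financials", "Max 400 words"],
    metadata={"produced_by": "model-a", "schema": "example-v0"},
)

# Serialised, this becomes something a second model or agent can consume
# directly, instead of re-deriving everything from a fresh prompt.
print(json.dumps(asdict(envelope), indent=2))
```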

This matters because, without context, models can’t build on each other’s thinking; they start from scratch with every prompt. With MCP, you introduce a way to carry forward intent, constraints, even goals, which can enable more meaningful multi-agent systems. In theory, that could unlock new patterns: agents that collaborate on complex tasks, delegate decisions, or learn continuously from shared experience.
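Here’s a hedged sketch of what that hand-off could look like with two stubbed agents sharing a context object. Nothing in it is real MCP; it simply shows each step reading and extending shared state rather than starting from a bare prompt.

```python
# Illustrative multi-agent hand-off over shared context.
# The agents are stubs and the context shape is an assumption.

def planning_agent(ctx: dict) -> dict:
    # Hypothetical: decide sub-tasks and record them so later agents can see the plan.
    ctx["memory"].append("Plan: gather roadmap items, then draft summary")
    return ctx


def drafting_agent(ctx: dict) -> dict:
    # The drafting step sees the original intent, constraints and plan directly,
    # without a human re-explaining them in a new prompt.
    draft = f"Draft within {len(ctx['constraints'])} constraints for: {ctx['intent']}"
    ctx["memory"].append(f"Draft produced: {draft}")
    return ctx


context = {
    "intent": "One-page Q3 roadmap summary for the exec team",
    "constraints": ["Max 400 words"],
    "memory": [],
}

for agent in (planning_agent, drafting_agent):
    context = agent(context)

print(context["memory"])
```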

It’s not there yet. MCP is still forming - more a concept than a settled standard. But as in the early days of APIs, there’s a sense that something foundational is emerging: a protocol that could let AI systems speak the same language without needing us to mediate.

It might take time to materialise, and the practical use cases aren’t fully known. But if it plays out, the implications are big: not just faster AI development, but entirely new ways of thinking about distributed intelligence.

We’ll be watching closely. Because the moment AIs can truly talk to each other - with memory, intent and shared context - is the moment the paradigm actually shifts.
