I’ve been lucky to be part of lots of great discussions recently*, all pointing towards the same thing: the rise of interoperability.
(* Events like:
- Connected Places Catapult Summit 2026’s panels on data, SMEs and innovation
- Chats with gov.uk design teams about their work on user data
- Chats with an AI startup)
At their core they’ve all been about increasing the interoperability of systems – how they connect, communicate and exchange information seamlessly.
In short, it’s all about connecting, not recollecting data.
Previously, the push was towards growing huge pools of data. It’s something we’ve heard Outlandish clients talk about before: the need to grow vast, amorphous “data lakes”, pumped full of every piece of data they can collect on a user’s interactions with the business, service or external services.
But this often means organisations recollecting data that others have already collected.
We need to get away from recollecting data and focus on connecting the different systems of data held and maintained by different parties. In other words, we need to move towards interoperability: the ability of different systems, devices and organisations to connect and exchange information with minimal friction. (And in a secure, GDPR-compliant and non-extractive way, natch.)
The rise of infratech
A new sector is emerging around this interoperability of organisations’ digital infrastructure: “infratech”. Namechecked at last week’s Connected Places Catapult Summit, it’s seen as key to reducing friction and duplicated effort, and to helping SMEs scale.
It’s being supported by some major £££ too. As part of the UK government’s 10-year Modern Industrial Strategy, the government will be investing up to £12 million in “UK Data Sharing Infrastructure Initiatives” from April 2026.
Meanwhile the EU’s Common European Data Spaces will make more data available for access and reuse in a number of fields: health, agriculture, manufacturing, energy, mobility, finance, public administration, skills and the EU’s Green Deal.
Tapping into the National Data Library
And back in the UK once more, there’s the £100 million National Data Library, a flagship government project designed to unlock the value of public-sector data. It aims to create a secure, centralised infrastructure to make high-value data accessible for research and public services while upholding privacy and ethical standards.

Achieving everyday interoperability
At least, that’s the big noise about interoperability. But at Outlandish we’re already seeing the principles behind interoperability emerge on a smaller, more tangible scale.
In our conversations with the Government Digital Service team (the folk behind gov.uk, who recently came into the office for one of our Consent-Based Decision Making training sessions) we learnt how they’re connecting and working with different datasets in an interoperable way, instead of pooling things into one massive data lake.
The challenge they recounted is that various gov.uk websites all track visitor behaviour in different ways. Different sites such as vehicletax.service.gov.uk or universal-credit.service.gov.uk use different terminology, or have different definitions for the same things. For instance, what counts as a “page view” might be defined differently on the DVLA website and the tax website. Or page views might be named something unique on each.
The challenge then is how to give their teams a complete and accurate picture of user behaviour across all the different properties in the gov.uk ecosystem.
The solution has been to build a meta-layer on top of each site’s data that defines and describes it, allowing AI tools to work with it. Now, if tasked with something like “show me the data on all the page views across all the gov.uk sites”, an AI tool can retrieve data that meets the intention of the team member – and it handles the data correctly, no matter how it is defined on each website.
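We don’t know the details of how the GDS team built this, but the principle of a descriptive meta-layer is easy to sketch: a canonical vocabulary, plus per-site mappings that translate each property’s local terminology into it. Everything in the snippet below – the event names, the mapping shape – is hypothetical, purely for illustration:

```python
# A sketch of a descriptive meta-layer: each site keeps its own event
# terminology, and a canonical mapping translates it for downstream tools.
# All site and field names here are hypothetical, for illustration only.

CANONICAL_METRICS = {
    "page_view": "A single page being loaded or reloaded in a browser",
}

# Per-site mappings: what each property calls a "page view" locally.
SITE_MAPPINGS = {
    "vehicletax.service.gov.uk": {"pageLoad": "page_view"},
    "universal-credit.service.gov.uk": {"screen_view": "page_view"},
}


def normalise_event(site: str, local_name: str) -> str | None:
    """Translate a site's local event name into the canonical vocabulary."""
    return SITE_MAPPINGS.get(site, {}).get(local_name)


# An AI tool asked for "page views across all gov.uk sites" can query each
# site's data and normalise terminology on the way through:
assert normalise_event("vehicletax.service.gov.uk", "pageLoad") in CANONICAL_METRICS
```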
MCP Servers – connecting your organisation’s data and resources to AI
Finally, I’ve been learning this month about MCP servers, which help connect AI assistants / LLMs to the systems where data lives. They are all about interoperability. Or rather, they are AI-operable infrastructure.
At its core, an MCP server is a standard way to give AI apps (in particular those with a chat interface) access to your organisation’s tools, data, and the workflows to get stuff done, without building a one-off integration for every single thing you want to connect to.
Another way of framing it is that it’s like giving an AI tool a playbook of things it can do, plus the tools it needs to do them (there’s a minimal code sketch after this list). It gives the AI:
- Things it can “see” (MCP calls these “resources”), like assets and other material from your organisation’s systems. For instance:
  - Documents
  - Databases
  - CRM records
  - Files
- Things it can “do” (MCP’s “tools”). Like:
  - Create a record
  - Update a field
  - Send a message
  - Run a query
  - Trigger something else
- And finally, ways of working (MCP’s “prompts”), i.e. the workflows it might respond to. Such as:
  - “Search the database”
  - “Generate a report”
  - “Prepare a draft email”
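To make that concrete, here’s a minimal sketch of an MCP server using the official MCP Python SDK’s FastMCP helper. The CRM-flavoured names are invented for illustration; the shape is the point. Each of the three things above maps onto a first-class MCP concept – resources (things the AI can “see”), tools (things it can “do”) and prompts (reusable ways of working):

```python
# Minimal MCP server sketch using the official Python SDK (pip install mcp).
# The CRM-style names are hypothetical, for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("org-crm")


# A resource: something the AI can "see".
@mcp.resource("crm://clients/{client_id}")
def get_client_record(client_id: str) -> str:
    """Return a client's CRM record as text."""
    return f"Record for client {client_id} (fetched from your CRM here)"


# A tool: something the AI can "do".
@mcp.tool()
def update_client_status(client_id: str, status: str) -> str:
    """Update the status field on a client record."""
    # ...write to your CRM here...
    return f"Client {client_id} status set to {status}"


# A prompt: a reusable way of working the AI can be handed.
@mcp.prompt()
def draft_followup_email(client_id: str) -> str:
    """A workflow prompt for preparing a draft email."""
    return f"Prepare a draft follow-up email for client {client_id}."


if __name__ == "__main__":
    mcp.run()  # defaults to stdio, i.e. a local, private server
```

Once a server like this is registered with an MCP-capable AI app, the app can discover the resource, tool and prompt by itself – no bespoke integration code needed on the AI side.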
It’s worth noting too that an MCP server can either be local, running on your laptop where it can access your files and remain private, or live in a shared space – perhaps a shared server used by your team. The shared option gives it the ability to connect to shared tools and help operate the organisation.
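In the Python SDK, that local-versus-shared choice is mostly a question of transport. A sketch, using the same hypothetical server as above:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("org-crm")

# Local: talk to an AI app on this machine over stdio, so files and data
# stay on your laptop.
mcp.run(transport="stdio")

# Shared: serve over HTTP instead, so a whole team's AI tools can reach it
# on an internal server. (Pick one transport, not both.)
# mcp.run(transport="streamable-http")
```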
Using MCP servers in a real-world environment
It’s not hard to see a future where, for instance, MCP servers could help community energy groups using Nook (our case management tool). By giving the AI access to Nook and the team’s calendar tools, it could allow a Retrofit Programme Coordinator to write a prompt such as “Create a list of all of our clients who’ve been waiting more than 2 weeks for a home visit to discuss retrofit. Schedule a tentative visit in a Retrofit Assessor’s calendar for each, max one per day”.
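As far as I know Nook doesn’t expose an MCP server today, but if it did, that prompt would bottom out in tool calls along these lines. Every name here is hypothetical, not Nook’s actual API:

```python
# Hypothetical MCP tools a Nook server might expose; the names are
# invented for illustration, not Nook's real API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("nook")


@mcp.tool()
def clients_awaiting_visit(min_days_waiting: int) -> list[dict]:
    """List clients who've waited longer than min_days_waiting for a home visit."""
    # ...query Nook's case records here...
    return [{"client_id": "c-101", "waiting_since": "2025-05-01"}]


@mcp.tool()
def schedule_tentative_visit(client_id: str, assessor: str, day: str) -> str:
    """Put a tentative retrofit visit in an assessor's calendar."""
    # ...call the team's calendar tool here...
    return f"Tentative visit for {client_id} with {assessor} on {day}"
```

The coordinator’s prompt then becomes a chain the AI can work through itself: call clients_awaiting_visit(14), then schedule_tentative_visit for each result, one per day.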
Plus if you think bigger, what might a future look like where our community energy clients can also use MCP servers to draw on the housing data in the National Data Library? Or the energy data?

Interoperability, sensibly
Of course, all of these things should be sensibly implemented. I don’t think we should be connecting systems blindly, and certainly not empowering AI tooling to take life-changing actions. When it comes to MCP servers, it means restricting the tools shared and the playbook (the series of actions the AI tool can take) to only those that are safe.
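In MCP terms that restriction is built in, because the AI can only ever call what the server actually registers. A sketch of the idea, again with hypothetical names:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("restricted")


# Registered: the AI can discover and call this read-only tool.
@mcp.tool()
def search_records(query: str) -> list[str]:
    """Read-only search across records; safe to expose."""
    return [f"result for {query!r}"]


# Deliberately NOT registered: with no decorator, the AI has no way to
# call this, even though the function exists in our codebase.
def delete_client(client_id: str) -> None:
    raise NotImplementedError("kept out of the AI's playbook on purpose")
```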
Plus, of course, we shouldn’t be sharing data out of the MCP server back to the AI companies for data harvesting and larger model training. We should be wary of data colonialism and extractive processes here, and we’re all able to push back and seek another way. This is certainly the approach taken by the likes of the gov.uk teams, who use AI tooling (via many of the larger commercial players, the OpenAIs and Anthropics) to perform actions while restricting what is shared back to those companies.
It’s certainly an interesting time. Sensible levels of interoperability, with limitations on integrations and workflows, give organisations a lot of potential to cut down on administrative busywork.
Lead photo by D. Lamar Hanri on Unsplash
Mid-content photo by Shubham Dhage on Unsplash
Final photo by Anton Savinov on Unsplash