I do not like making predictions, especially about the future. That is not modesty; it is training. Academics usually focus on modelling the past, and our forecasts rest on assumptions about the future as much as on models. We are very good at models, but not very good at making assumptions.
Still, 2026 has just started and I am willing to put a stake in the ground. If you ask me what this year will be about for AI, my answer is: governance. Not because governance is glamorous (it is, to be honest, often quite boring), but because it is now the constraint that separates astonishing capability from serious adoption.
Governance
We are at a point where AI can already do a large share of what humans do for a living. Not everything, and not always, but the boundary has moved far enough that the key question in most organisations is no longer, “Can the model do it?” The question is, “Can we trust it to do it in a way that is robust, replicable, auditable, and transparent?” Trust, in a professional setting, is not a feeling. It is a system.
That system needs to answer some very practical questions. If the same inputs are provided tomorrow, will the output be meaningfully consistent? If the output is challenged by a colleague, a client, or a regulator, can you show what happened, what data was used, and what checks were performed? If something goes wrong, can you diagnose the failure mode and fix the workflow rather than simply blaming the model? And if the output is consequential, do you have a verification process that catches the errors you care about most?
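None of this requires exotic machinery. As a minimal sketch (in Python, with invented names like run_model and checks; no specific product or library is implied), treating trust as a system can mean wrapping every consequential model call so that the inputs, the output, and the checks that were run are recorded and can be produced later when someone asks what happened:

```python
import hashlib
import json
from datetime import datetime, timezone

def run_with_audit_trail(task_id, inputs, run_model, checks, log_path="audit_log.jsonl"):
    """Call a model, run explicit checks, and keep a record that can answer:
    what happened, what data was used, and which checks were performed."""
    output = run_model(inputs)  # any callable that produces the draft output
    check_results = {name: bool(check(inputs, output)) for name, check in checks.items()}
    record = {
        "task_id": task_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "checks": check_results,                  # which checks ran and whether they passed
        "approved": all(check_results.values()),  # nothing moves forward on a failed check
    }
    with open(log_path, "a") as f:                # append-only log is the audit trail
        f.write(json.dumps(record) + "\n")
    return record
```

The code itself is trivial; the discipline of always writing the record, and of deciding in advance which checks matter, is the governance.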
The organisations that are moving fastest are already shifting from experimentation to operating discipline. The early phase of the current AI wave was about capability: new models, new interfaces, and the sense that the ceiling kept rising. The later phase has been about making those capabilities usable at scale: guardrails, logging, evaluation, access control, and documentation. In other words, governance.
In real estate, governance will matter for an obvious reason. Our industry is built on decisions made under uncertainty, using data that is incomplete, inconsistent, and often context dependent. AI can help with underwriting, valuation support, lease abstraction, market monitoring, reporting, and research synthesis, but it will only be widely adopted when it becomes defensible. “It looked right” is not a governance framework. Neither is “the model said so.”
Vibe work
There is a natural consequence of my first prediction. 2026 will also be the year when ‘vibe work’ becomes normal, even if we do not call it that (and I hope we will not). In software development, strong programmers already ship code that AI produced end to end, sometimes with less line-by-line inspection than you might expect (and sometimes with none at all). The reason this works is not that they have become careless; it is that the workflow has shifted. They are managing systems, tests, and constraints as much as they are writing text. The locus of expertise moves from producing every keystroke to specifying what must be true and verifying that it is true.
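A toy example makes the shift concrete. Suppose an AI drafts a small rent indexation helper; the function below and its name are invented for illustration. The human contribution is not the implementation but the tests that pin down what must be true of it:

```python
def indexed_rent(base_rent, cpi_rates):
    """Stand-in for an AI-drafted helper: compound a base rent by annual CPI uplifts."""
    rent = base_rent
    for rate in cpi_rates:
        rent *= (1 + rate)
    return rent

# The human-written specification: properties the output must satisfy,
# however the implementation was produced.
def test_zero_inflation_leaves_rent_unchanged():
    assert indexed_rent(100_000, [0.0, 0.0, 0.0]) == 100_000

def test_rent_never_falls_below_base_with_non_negative_rates():
    assert indexed_rent(100_000, [0.02, 0.0, 0.03]) >= 100_000

def test_single_period_matches_a_hand_calculation():
    assert abs(indexed_rent(100_000, [0.05]) - 105_000) < 1e-6
```

If the tests are good, it matters much less who, or what, typed the function.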
Real estate will notice this later than software engineering, but it will arrive. We will increasingly complete tasks by describing outcomes rather than by manually assembling every intermediate step. We will draft materials, models, and analyses at speed, and we will send work forward with confidence not because we personally scrutinised every sentence, but because the workflow makes scrutiny systematic. That sounds counterintuitive until you remember that modern organisations already rely on outputs that no single person fully inspects, whether those are dashboards, spreadsheets, templates, or automated reports. The difference is that AI expands the surface area of what can be automated.
This is precisely why governance is not a bureaucratic afterthought. Vibe work without governance is simply speed-running risk. Vibe work with governance is a competitive advantage, because it lets you move quickly while staying accountable. The firms that win will not be the ones with the flashiest demos. They will be the ones that can trust their AI-enabled workflows enough to put them in front of clients, committees, and regulators.
Specialisation
Another consequence of the rise of governance is specialisation. The era of one general tool for everything is giving way to a stack of specialised workflows. We will see more teams build internal agents that do one thing extremely well: drafting investment committee materials in a house style, cleaning and reconciling datasets, generating and checking cash flows, monitoring covenants, summarising portfolios, or producing standardised reporting with explicit assumptions. Different tasks will require different approaches, and part of 2026 will be learning which route is best in which context: off-the-shelf models, fine-tuned models, retrieval-based systems, or agentic workflows that execute repeatable steps with checks at each stage.
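To show what the last of those can look like, here is a deliberately simplified sketch; the stage names and placeholder values are invented, and any real version would plug in a firm's own extraction, modelling, and review logic. The essential idea is that no step's output moves forward until an explicit check has passed:

```python
def run_workflow(document, steps):
    """Run (name, step, check) stages in order; stop at the first failed check."""
    state = {"document": document}
    for name, step, check in steps:
        state = step(state)
        if not check(state):
            # Fail loudly and say where, so the workflow (not the model) gets fixed.
            return {"status": "needs_review", "failed_at": name, "state": state}
    return {"status": "complete", "state": state}

# Illustrative stages for, say, lease abstraction (values are placeholders):
steps = [
    ("extract_terms",
     lambda s: {**s, "terms": {"rent": 120_000, "term_years": 10}},
     lambda s: s["terms"].get("rent", 0) > 0),
    ("build_cashflow",
     lambda s: {**s, "cashflow": [s["terms"]["rent"]] * s["terms"]["term_years"]},
     lambda s: len(s["cashflow"]) == s["terms"]["term_years"]),
]

print(run_workflow("lease.pdf", steps))
```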
Customisation will accelerate because everyone will want AI to do exactly their version of the work. AI can already build an Excel financial model, but building your Excel model is a different problem. Your assumptions, your conventions, your sensitivity structure, your audit trail, your sign-off points, and your error tolerances matter. Turning a clever output into a reliable process takes design work, and that design work is increasingly where professional value sits.
Underneath all of this is a quieter driver: education. Not everyone needs to become an AI engineer, but everyone whose work touches AI will need a working model of what it is, what it is not, and how it fails. People who never learn will find their skills becoming less relevant, not because they are bad at their jobs, but because the definition of competence will change. People who do learn will still need to level up, because the frontier is moving and the governance expectations are moving with it.
So, yes, I am reluctant to predict the future. But if you force me, here is the shape of 2026 as I see it. Capability will keep improving. Automation will spread. Specialised workflows will proliferate. And the differentiator will not be access to models. It will be who can govern them well enough to trust them. Dry, perhaps, but scalable. And in the end, scalability is what turns technology into practice.


