It's January, which means prediction season is upon us. Our inboxes are filling with market outlooks, rental forecasts, and investment theses for the year ahead. Conferences are showcasing economists with confident slides. Everyone, it seems, has a view on what's coming.
And increasingly, these predictions come with an implicit promise: AI will make our forecasts better. More data, more processing power, more sophisticated models - surely this is how we finally crack the prediction problem.
I've spent a lot of time building and teaching machine learning models across physics, engineering, and real estate. I've run forecasting workshops for executives and built predictive systems for research projects. And I need to tell you something that might be uncomfortable: AI doesn't solve the fundamental problem with forecasting. But that's actually fine, because what it does offer is genuinely valuable, just not in the way most people expect.
Why forecasting is hard (and AI doesn't change this)
The core issue isn't computational power. It isn't data volume. It's that the future genuinely doesn't exist yet.
This sounds philosophical, but it has practical consequences. Real estate markets, like economies and cities, are what economists call reflexive systems. Predictions change the thing being predicted. If everyone forecasts that a neighbourhood will gentrify, capital flows in, and the forecast becomes self-fulfilling. Or overshoots and collapses. The act of prediction alters the outcome.
My background is in physics, and one of the most humbling lessons from that field is that some systems are inherently unpredictable. Not because we lack data or computing power, but because tiny variations in initial conditions cascade into wildly different outcomes. More data doesn't fix this. Better algorithms don't fix this. It's a fundamental property of the system itself.
Real estate markets share some of these characteristics. They're influenced by interest rates, policy decisions, demographic shifts, technological changes, and sentiment - all of which interact in complex, non-linear ways. No model, however sophisticated, can predict with certainty how these will unfold. There is always going to be uncertainty and some amount of guessing involved, even in the most elaborate forecast.
What AI actually offers
If AI can't give us certainty, what's the point? Quite a lot, actually, but you need to adjust your expectations.
Faster scenario exploration. AI doesn't give you better point predictions, but it dramatically accelerates your ability to explore 'what if' questions. What if interest rates stay elevated for another two years? What if remote work patterns stabilise at current levels? What if this submarket follows the trajectory of that comparable one? You can test more scenarios in an afternoon than you could manually model in a month.
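To make this concrete, here is a minimal Monte Carlo sketch of scenario exploration in Python. All numbers are illustrative assumptions, not real market parameters: a baseline growth rate, a hypothetical 'rate drag' while rates stay elevated, and a volatility figure.

```python
import random

random.seed(42)

def simulate_rent_growth(years, base_growth, rate_drag, volatility, n_paths=1000):
    """Monte Carlo sketch: cumulative rent level under one scenario.

    base_growth: assumed annual growth with neutral rates
    rate_drag:   assumed growth haircut while rates stay elevated
    volatility:  std dev of the annual random shock
    """
    finals = []
    for _ in range(n_paths):
        level = 1.0
        for _ in range(years):
            growth = base_growth - rate_drag + random.gauss(0, volatility)
            level *= 1 + growth
        finals.append(level)
    finals.sort()
    return finals[len(finals) // 2]  # median cumulative level across paths

# Two 'what if' scenarios: rates normalise vs. stay elevated for two years
normal = simulate_rent_growth(years=2, base_growth=0.035, rate_drag=0.0, volatility=0.02)
elevated = simulate_rent_growth(years=2, base_growth=0.035, rate_drag=0.015, volatility=0.02)
print(f"median 2-yr rent level, rates normal:   {normal:.3f}")
print(f"median 2-yr rent level, rates elevated: {elevated:.3f}")
```

The point isn't the specific numbers; it's that swapping one assumption and re-running takes seconds, so you can compare many scenarios in the time a single manual model used to take.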
Better-quantified uncertainty. Good forecasting isn't about being right, it's about knowing how wrong you might be. AI tools can help you understand the range of plausible outcomes, identify which assumptions your forecast is most sensitive to, and communicate uncertainty more honestly. A forecast that says 'rental growth between 2% and 5%, most likely around 3.5%' is more useful than one that confidently states '3.7%'.
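A range like that can come from something as simple as the spread of historical outcomes. Here's a crude sketch using hypothetical growth history (the numbers and the 1.5-standard-deviation band are illustrative choices, not a rigorous prediction interval):

```python
import statistics

# Hypothetical history of annual rental growth (illustrative numbers)
history = [0.021, 0.034, 0.045, 0.012, 0.038, 0.029, 0.050, 0.018, 0.041, 0.033]

point = statistics.mean(history)    # the 'most likely' estimate
spread = statistics.stdev(history)  # how much outcomes have varied

# A crude plausible range: point estimate +/- roughly 1.5 standard deviations
low, high = point - 1.5 * spread, point + 1.5 * spread
print(f"most likely around {point:.1%}, plausible range {low:.1%} to {high:.1%}")
```

Even this back-of-the-envelope version communicates more honestly than a bare point estimate.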
Time freed for judgment. The grunt work of forecasting (data cleaning, model fitting, sensitivity analysis) can increasingly be automated. This doesn't replace human judgment; it creates more space for it. You can spend less time wrestling with spreadsheets and more time thinking about what the numbers actually mean for your decisions.
Democratised access to serious methods. This is perhaps the most significant shift. Tools like ChatGPT now allow non-programmers to run forecasting models that previously required coding skills: ARIMA for time series, gradient boosting for complex patterns, regression analysis for understanding drivers. The barrier to entry has collapsed. Someone who's never written a line of code can now engage meaningfully with methods that were once the exclusive domain of quantitative specialists.
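To give a feel for what these methods actually do, here is a stripped-down AR(1) fit — the simplest relative of the ARIMA family — in plain Python. The series values are made up, and in practice you'd ask a tool to run a proper library (such as statsmodels) for you; this sketch just shows the idea: tomorrow is modelled as a function of today.

```python
# Illustrative quarterly series (made-up values)
series = [2.1, 2.4, 2.2, 2.8, 3.0, 2.9, 3.3, 3.5, 3.4, 3.8]

# Least-squares fit of y[t] = a + b * y[t-1]
x = series[:-1]  # lagged values
y = series[1:]   # next-period values
n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n
b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
    sum((xi - mean_x) ** 2 for xi in x)
a = mean_y - b * mean_x

# One-step-ahead forecast from the last observed value
next_value = a + b * series[-1]
print(f"one-step-ahead forecast: {next_value:.2f}")
```

The libraries add differencing, moving-average terms, and proper diagnostics on top, but the core logic is no more mysterious than this.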
A thinking partner for interpretation. AI helps connect the dots in ways that accelerate understanding. 'What does this coefficient mean?' 'Why might this forecast diverge from that one?' 'What assumptions am I implicitly making?' The conversation itself clarifies thinking - having an intelligent interlocutor who can explain statistical concepts in plain language, or challenge your interpretation, has genuine value.
Deeper examination for experts. For those who already know the methods, AI accelerates the tedious parts: checking residuals, comparing model specifications, stress-testing assumptions, running diagnostics. More time for the judgment calls that actually matter. The expert becomes more expert, not obsolete.
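As one example of those tedious parts, here is a quick residual check in Python, with illustrative actual-vs-fitted values. Residuals that still trend together (high autocorrelation) suggest the model is missing structure rather than just absorbing noise.

```python
# Illustrative data: observed values vs. a model's fitted values
actual = [2.4, 2.2, 2.8, 3.0, 2.9, 3.3, 3.5, 3.4, 3.8, 3.7]
fitted = [2.3, 2.4, 2.6, 2.9, 3.0, 3.2, 3.4, 3.5, 3.6, 3.8]

residuals = [a - f for a, f in zip(actual, fitted)]
n = len(residuals)
mean_r = sum(residuals) / n  # should be near zero for an unbiased model

# Lag-1 autocorrelation of the residuals
num = sum((residuals[i] - mean_r) * (residuals[i - 1] - mean_r) for i in range(1, n))
den = sum((r - mean_r) ** 2 for r in residuals)
lag1 = num / den

print(f"residual mean: {mean_r:+.3f}, lag-1 autocorrelation: {lag1:+.2f}")
```

Automating checks like this across a dozen candidate models is exactly the kind of grunt work that used to eat an analyst's afternoon.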
The real shift isn't 'AI makes better predictions.' It's 'more people can engage seriously with forecasting, and experts can go deeper.' Here, as elsewhere, AI is augmenting our work rather than replacing it.
Where forecasting goes wrong
In my experience consulting and teaching, the failures I see aren't usually about model sophistication. They're more mundane. And AI can't fix them.
The inputs are wrong. Garbage in, garbage out—but now faster. AI can process your data with impressive speed, but if the data is incomplete, outdated, or simply wrong, you just get wrong answers more efficiently. I've seen organisations invest heavily in AI forecasting tools while their underlying data sits in inconsistent spreadsheets maintained by people who left the company years ago.
The question is wrong. You can predict the wrong thing with great precision. There are plenty of teams building sophisticated models to forecast metrics that don't actually drive the decisions they need to make. The forecast is technically excellent; it's just not useful.
The forecast gets 'adjusted' into meaninglessness. Perhaps the most common failure mode. Many organisations buy forecasting indices from major consultancies without really understanding what they're based on. And here's the dirty secret of the industry: those indices have often already been through multiple rounds of adjustment. A consultancy like CBRE or JLL might buy forecasts from Oxford Economics, who in turn derive them from central bank projections - with 'adjustments' happening at every stage. By the time the number reaches your desk, it's been massaged by three or four sets of hands, each adding their own assumptions and biases.
Then your team adjusts it again to fit gut instinct or internal politics. The original forecast becomes an elaborate exercise in collective fiction.
This is where AI offers something genuinely different. When you build your own forecast, even a simple one, you know exactly what went into it. You chose the inputs, you understand the assumptions, you can see why the model produced the number it did. A forecast you built yourself, with transparent logic, is much harder to casually 'adjust' than a black-box index you bought from someone else. Ownership creates accountability.
So what should you actually do?
Treat forecasts as tools for thinking, not answers. The value of a forecast isn't the number it produces; it's the structured thinking required to produce it. What assumptions are you making? What would have to be true for this outcome to occur? What signals would tell you the forecast is wrong? These questions matter more than the point estimate.
Invest in understanding your assumptions, not just your outputs. The most dangerous forecasts are the ones where nobody remembers what went into them. When markets shift, you need to know which of your assumptions broke. AI can help you document and test assumptions more systematically, but only if you prioritise this.
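One lightweight way to do that is to record each assumption alongside the signal that would falsify it, so the forecast can be re-checked mechanically as data arrives. The sketch below is illustrative; the names and thresholds are hypothetical, not recommendations.

```python
# Each assumption carries its own falsification test
assumptions = [
    {"name": "rates stay below 5%",
     "check": lambda obs: obs["policy_rate"] < 0.05},
    {"name": "vacancy stays under 8%",
     "check": lambda obs: obs["vacancy"] < 0.08},
    {"name": "net migration stays positive",
     "check": lambda obs: obs["net_migration"] > 0},
]

def review(observed):
    """Return the names of assumptions the latest data has broken."""
    return [a["name"] for a in assumptions if not a["check"](observed)]

# Hypothetical latest observations
latest = {"policy_rate": 0.055, "vacancy": 0.06, "net_migration": 12000}
print("broken assumptions:", review(latest))
```

When an assumption breaks, you know immediately which part of the forecast needs revisiting — instead of discovering months later that nobody remembers what the number rested on.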
Build the skill, not just the tool. Access to AI forecasting capabilities is now nearly universal. The differentiator isn't having the tool, it's knowing how to use it well. Understanding what questions to ask, how to interpret outputs critically, when to trust results and when to be sceptical. This is a skill that can be developed, and it's becoming essential.
AI won't save your forecasts. But it can make you a better forecaster - if you approach it with the right expectations. The future remains uncertain. That's not a bug; it's a feature of reality. The goal isn't to eliminate uncertainty but to navigate it more skilfully.
And honestly? That's a more interesting problem than pretending we can predict the unpredictable.