Not All AI is Created Equal: Three Questions to Cut Through the Noise
“Where do you want to go today?”
Microsoft’s campaign slogan from the mid-90s. And does anyone remember the MPC badge? The Multimedia PC certification that got slapped on “approved” machines? The mid-90s were the height of the multimedia wars: PCs competing to become the best device for watching videos, playing MP3s, and consuming all the data you could handle, assuming it was downloadable over a 4kbps dial-up connection between the 30-minute disconnections.
Everything got the multimedia label. If it had speakers and a CD-ROM drive, it was a Multimedia PC. The badge told you almost nothing about what the machine could actually do, but it sold units.
Fast forward 30 years (and yes, I feel old too) and we’re living through exactly the same thing with AI. If it plugs into a wall, it’ll have some AI strapline somewhere, it seems. Your PC, your phone, your watch, your fridge, your microwave, your oven. Everything is “AI-ready” or “AI-powered.” The label is everywhere, and it tells you almost nothing.
The Problem With a Three-Letter Word
The reason “AI” has become such a useless label isn’t that people are lying. It’s that the term covers such an absurdly wide range of technology that it’s effectively meaningless without qualification. It’s like saying a vehicle has an engine. Okay, is it a lawnmower or a 747?
The mainstream conversation right now is dominated by Large Language Models. The chatbots, the text generators, the things most people picture when they hear “AI” in 2025. But the tech world seems to have collectively forgotten that AI has been around for a lot longer than the past few years. The foundations were laid by Alan Turing and others in the 1940s and 50s. The first artificial neural network, SNARC, appeared in 1951. The first program that could learn from its own mistakes, a checkers-playing system, arrived in 1952.
My own career has taken me across the gamut of computer science and ML projects, and while I’m no academic researcher, I’ve worked alongside some very clever people building very different types of models to solve very different problems. Statistical models for fraud detection. Computer vision systems for scientific instruments. Recommendation engines. Sensor monitoring. All of it AI, and none of it an LLM.
So when everything gets called “AI-ready,” the question that should follow is: what kind?
Three Questions That Actually Matter
Instead of accepting the AI label at face value, I’d suggest asking three questions. They won’t make you an expert, but they’ll cut through the marketing fog faster than anything else.
1. Is it learning, or is it following rules?
This is the most basic distinction, and it’s the one most often obscured by marketing. A genuine machine learning model improves its performance based on data. It identifies patterns, adjusts, and gets better over time. A rules engine executes a predefined set of if/else logic that a human wrote.
Both are perfectly valid tools. Sometimes a well-crafted rules engine is exactly what you need. It’s fast, predictable, and easy to audit. There is absolutely nothing wrong with that. But calling it “AI-powered” sets an expectation that the system is doing something it isn’t. If your vendor can’t clearly explain whether their product is learning from data or following a script, that’s a red flag, not because rules engines are bad, but because the distinction matters for how you evaluate, trust, and maintain the thing.
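To make the distinction concrete, here's a minimal sketch of the same fraud-flagging task solved both ways. All names and thresholds here are hypothetical, invented for illustration; the "learning" model is deliberately tiny, just enough to show a system deriving its own decision boundary from data rather than executing logic a human wrote.

```python
# 1. A rules engine: a human wrote the logic; it never changes on its own.
def rules_engine_flag(amount: float, country: str) -> bool:
    """Predefined if/else logic. Fast, predictable, easy to audit."""
    if amount > 10_000:
        return True
    if country not in {"GB", "US"}:
        return True
    return False

# 2. A (deliberately tiny) learning model: it derives its own threshold
#    from labelled historical data, and shifts as new data arrives.
def fit_threshold(history: list[tuple[float, bool]]) -> float:
    """Learn a cut-off: the midpoint between the highest legitimate
    amount and the lowest fraudulent amount seen so far."""
    legit = [amt for amt, is_fraud in history if not is_fraud]
    fraud = [amt for amt, is_fraud in history if is_fraud]
    return (max(legit) + min(fraud)) / 2

# Labelled history: (transaction amount, was it fraud?)
history = [(120.0, False), (300.0, False), (9_500.0, True), (12_000.0, True)]
threshold = fit_threshold(history)  # 4_900.0 for this history

def learned_flag(amount: float) -> bool:
    return amount > threshold
```

Notice the maintenance difference: changing the rules engine means editing code; changing the learned model means feeding it different data. That's exactly why the distinction matters for how you evaluate and trust the thing.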
2. Is the output deterministic or probabilistic?
This is the question that I think matters most, and it’s the one that gets lost almost entirely in the current conversation.
A deterministic model, given the same input, will give you the same output every time. You can test it, validate it, explain it, and predict its behaviour. A probabilistic model gives you its best guess, a plausible output that might differ next time you ask the exact same question. If you’ve ever asked ChatGPT the same thing twice and got different answers, you’ve experienced this firsthand.
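A toy sketch of the difference, with hypothetical scoring functions standing in for real models: the deterministic scorer maps the same transaction to the same risk score every time, while the probabilistic one adds sampled noise, so asking twice can give two different answers.

```python
import random

def deterministic_score(amount: float) -> float:
    """Same input, same output, every time. Testable and auditable."""
    return min(amount / 10_000, 1.0)

def probabilistic_score(amount: float, rng: random.Random) -> float:
    """A best guess with sampled variation: ask twice, get two answers."""
    noisy = amount / 10_000 + rng.gauss(0, 0.05)
    return min(max(noisy, 0.0), 1.0)

amount = 4_200.0

# Deterministic: repeated calls are identical by construction.
assert deterministic_score(amount) == deterministic_score(amount)

# Probabilistic: repeated calls will almost certainly differ,
# just like asking a chatbot the same question twice.
rng = random.Random()
a = probabilistic_score(amount, rng)
b = probabilistic_score(amount, rng)
```

The audit implications fall straight out of this: you can write a regression test for the first function, but for the second you can only make statistical claims about its behaviour.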
Neither approach is inherently better. They serve different purposes. But the implications for trust, governance, and risk are completely different.
Consider financial transaction monitoring. If you’re flagging potentially suspicious activity, do you want a model that gives you the same risk score every time for the same transaction? Or one that might score it differently on a different day? Both approaches exist, both have their place. But you’d better know which one you’re deploying and why, because the regulatory, audit, and compliance implications are worlds apart.
The same applies in healthcare, manufacturing, legal tech, anywhere the stakes are real. A deterministic model that classifies a tumour the same way every time is a fundamentally different tool from a probabilistic one that gives you its best interpretation. Both might be called AI. The governance they require couldn’t be more different.
This is also where a lot of the “boring” AI lives. The workhorses that don’t make headlines but run critical infrastructure. Fraud scoring on your credit card. Pricing algorithms when you book a flight. Predictive maintenance on industrial equipment. These models are often deterministic, explainable, and battle-tested over years of data. Nobody writes breathless articles about them, but they’re doing higher-stakes work than most LLM deployments. Nobody dies if a chatbot writes a bad email. A misclassified fraud signal or a missed anomaly in a sensor reading is a different story entirely.
3. Is it making decisions, or informing them?
The final question is about autonomy, and it changes everything about risk, liability, and trust.
A model that flags a suspicious transaction for a human analyst to review is a fundamentally different proposition from one that automatically blocks the transaction. A system that highlights a potential defect on a production line for an inspector to check is not the same as one that rejects the part without human involvement. A tool that drafts an email for you to edit is not the same as one that sends it.
The governance, liability, and customer experience implications shift dramatically based on where the human sits in the loop. An AI system that surfaces information and lets a person decide carries one kind of risk. An autonomous system that acts carries another entirely. Both might be labelled “AI-powered.” The label doesn’t tell you which one you’re buying.
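One way to see how much hinges on that placement: in the sketch below (hypothetical structure and names), the model's output is identical in both pipelines; the only difference is what happens next, and that one branch is where the liability lives.

```python
def model_flags_suspicious(txn_amount: float) -> bool:
    """Stand-in for any model; the same one feeds both pipelines."""
    return txn_amount > 5_000

review_queue: list[float] = []  # transactions awaiting a human analyst
blocked: list[float] = []       # transactions the system acted on itself

def assistive_pipeline(txn_amount: float) -> str:
    """Informs a decision: every flag goes to a human for review."""
    if model_flags_suspicious(txn_amount):
        review_queue.append(txn_amount)
        return "queued for human review"
    return "processed"

def autonomous_pipeline(txn_amount: float) -> str:
    """Makes the decision: the system acts, no human in the loop."""
    if model_flags_suspicious(txn_amount):
        blocked.append(txn_amount)
        return "blocked automatically"
    return "processed"
```

Same model, same flag, entirely different risk profile: a wrong answer in the first pipeline costs an analyst a few minutes, while a wrong answer in the second blocks a customer's legitimate transaction with nobody watching.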
The Label Will Eventually Become Meaningless, and That’s Fine
The multimedia wars resolved themselves. We stopped caring about the MPC badge and started caring about what the machine could actually do. The same will happen with AI. Eventually, the marketing label will fade, and people will just talk about what the technology does, how it works, and whether it’s the right fit for the problem.
But right now, we’re in the messy middle. The label is everywhere and means almost nothing. Every product is “AI-ready.” Every vendor has an AI story. The companies and leaders that will get the most value from this era aren’t the ones buying everything with an AI sticker on it. They’re the ones who understand what kind of AI solves their specific problem and can tell the difference between a lawnmower and a 747.
So the next time someone pitches you an AI-powered product, skip the demos and the buzzwords. Just ask three questions: Is it learning or following rules? Is the output deterministic or probabilistic? Is it making decisions or informing them?
If they can answer clearly, you’re probably talking to someone who knows what they’ve built. If they can’t, well, that tells you something too.
Where do you want to go today? Turns out, the real question was always: what’s actually under the hood?

