Dreamhub’s Yoni Benshaul on the Power of AI-Native Architecture
Why verticalisation, ontology, and validation models matter more than features, and how AI-native systems will redefine the software stack.
AI that doesn’t guess
Yoni Benshaul has emerged as one of the most compelling voices in the shift toward AI-native software. As the founder and CEO of Dreamhub, and the former CEO of CB4, acquired by Gap, he brings a rare combination of machine-learning depth and real-world enterprise experience. And his message is unmistakable: AI doesn’t succeed because it’s powerful; it succeeds because the architecture underneath is built for it. He has seen what becomes possible when AI sits at the heart of product design — a shift he describes as moving toward true AI-native architecture.
Where others still treat AI as an add-on, Yoni is pushing the industry to rethink its foundations. Context, vertical ontology, validation models: these aren’t enhancements. They’re the scaffolding that determines whether a system behaves with intelligence or falls into hallucination and drift.
In this conversation, Yoni unpacks what most companies get wrong, what truly defines an AI-native product, and why the next decade of enterprise software will be shaped by architects, not algorithm chasers.
Rethinking the foundation
You’ve said many companies “build around AI, not with it.” What sparked that realisation?
Yoni Benshaul: It really comes from my machine-learning background. In a truly AI-native product, you’re not asking, ‘Where do we add AI?’ You’re asking, ‘If AI did most of this workflow, how should the product be designed?’
Traditional systems, especially CRMs, were built for maximum flexibility.
AI-native systems are designed for minimal friction: fully automated data entry, prompt-based interfaces, and workflows where the AI takes the lead rather than the user filling in endless fields.
But none of that works without context. AI behaves a lot like a person — ask a question with no framing, and the answer won’t make sense. Give context, and the response is meaningful. Models are the same. Context is the difference between intelligence and hallucination.
That’s why verticalisation and ontology are critical. They give the model the structure, language and specificity it needs to genuinely understand the user’s environment rather than guess around it.
AI-native vs. AI-added
How is AI-native different from embedding AI later?
Yoni: A simple example is qualification in B2B SaaS. In most generic CRMs, every customer builds their flow differently, using custom objects, fields, and stages. So, when AI tries to help, it has no idea where qualification actually lives, or which signals matter. It has to guess, and that’s where hallucinations happen.
But if your product includes a consistent qualification flow from day one, the model immediately understands where it is in the process, what the user is trying to do, and which criteria are meaningful. The context is built into the architecture itself. That’s the core difference. AI-added tools work around the product. AI-native tools shape the product and the model, from the foundation.
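The contrast Yoni describes can be sketched in a few lines. This is a minimal, hypothetical illustration (the stage names, `Deal` structure, and `build_model_context` helper are all invented for this example, not Dreamhub’s actual schema): when the product defines a fixed qualification flow, the context handed to the model is explicit rather than guessed from arbitrary custom fields.

```python
from dataclasses import dataclass, field
from enum import Enum

# A product-defined qualification flow: every deal moves through the
# same named stages, so the model always knows where it is.
class QualificationStage(Enum):
    PROBLEM_FIT = "problem_fit"
    BUDGET_CONFIRMED = "budget_confirmed"
    DECISION_MAKER_ENGAGED = "decision_maker_engaged"
    TIMELINE_AGREED = "timeline_agreed"

@dataclass
class Deal:
    name: str
    stage: QualificationStage
    # evidence gathered so far, keyed by stage name
    signals: dict = field(default_factory=dict)

def build_model_context(deal: Deal) -> str:
    """Assemble explicit context for the model, instead of letting it
    guess which of many custom fields encodes qualification."""
    remaining = [s.value for s in QualificationStage
                 if s.value not in deal.signals]
    return (
        f"Deal: {deal.name}\n"
        f"Current stage: {deal.stage.value}\n"
        f"Evidence so far: {deal.signals}\n"
        f"Stages still unverified: {remaining}"
    )

deal = Deal(
    "Acme pilot",
    QualificationStage.DECISION_MAKER_ENGAGED,
    {"problem_fit": "confirmed on discovery call",
     "budget_confirmed": "budget line named in follow-up email"},
)
print(build_model_context(deal))
```

Because the stages are part of the architecture, the model receives the same framing for every customer — which is exactly the “context built into the architecture” point above.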
Lessons from CB4 and Gap
What did the CB4 → Gap experience teach you about scaling AI?
Yoni: I was fortunate; Gap had strong innovation leadership, so people were already being pushed to think differently. That made the environment far more receptive than what many founders encounter.
But the biggest learning was cultural. As a founder, you’re used to making 90-degree turns overnight. In a large organisation, you can’t do that. You need alignment, patience, and a deep understanding of each stakeholder’s motivations. That shift is hard for many entrepreneurs, and it took me time to adjust as well.
The other major realisation was the gap between building great AI models and actually moving business outcomes with them. Enterprises hire incredibly smart people who can build sophisticated models, but productisation is an entirely different discipline. Adoption, trust, workflow integration: that’s where most AI initiatives fail. The challenge isn’t the model. It’s turning that model into something people will use and rely on in their day-to-day work.
Architecture, ethics, and trust
Why does architecture define trust?
Yoni: It comes down to one thing: context. Without the proper context, AI will hallucinate or surface obvious recommendations that add no real value. We saw this in my previous company — we sent AI-based recommendations to store managers, and earning their trust took time. Losing it took just a few irrelevant recommendations. They’re swamped, and once something feels off, they stop opening the app.
That’s why trust is an architectural issue. Architecture is where you decide what context the system captures and how it’s delivered to the model. If that foundation isn’t strong, hallucinations are inevitable, and trust collapses very quickly.
Dreamhub talks about “ethical architecture.” What does that mean?
Yoni: Ethics looks different depending on the domain, but in GTM systems, the core responsibility is privacy. For us, ethical architecture means anonymising learning, understanding patterns and processes without transferring individual identities. The system learns from behaviour, not from personal data. It also means avoiding black-box behaviour. If a field is updated, users can see exactly where it came from — the specific line in an email or a call excerpt.
Nothing is hidden. Transparency isn’t a compliance task for us. It’s a deliberate product decision, and it’s what makes the system trustworthy.
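One way to picture the “nothing is hidden” principle is to make provenance part of the data model itself, so a field value can never be written without its source. This is a sketch under stated assumptions — the `FieldUpdate` record, `apply_update` helper, and the example names are hypothetical, not Dreamhub’s implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class FieldUpdate:
    """An automated CRM field change that always carries its provenance."""
    field: str
    new_value: str
    source_type: str     # e.g. "email" or "call"
    source_excerpt: str  # the exact line the value was extracted from
    updated_at: datetime

def apply_update(record: dict, update: FieldUpdate) -> dict:
    """Store the value together with its provenance, so the UI can
    always show users exactly where a value came from."""
    record[update.field] = {
        "value": update.new_value,
        "provenance": {
            "type": update.source_type,
            "excerpt": update.source_excerpt,
            "at": update.updated_at.isoformat(),
        },
    }
    return record

contact = {}
apply_update(contact, FieldUpdate(
    field="title",
    new_value="VP of Operations",
    source_type="email",
    source_excerpt="Best, Dana Cohen, VP of Operations",
    updated_at=datetime.now(timezone.utc),
))
print(contact["title"]["provenance"]["excerpt"])
```

The design choice is that provenance travels with the value, rather than living in a separate audit log: the system cannot surface a field without also being able to surface its source.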
The invisible risks
Among entropy, hallucination drift, and model dependency, which risk is underestimated?
Yoni: Entropy is a much bigger issue than most people realise. Manual data is inherently unreliable. But automated data introduces a different kind of danger: you believe it’s accurate when it isn’t. And that’s actually worse, because decisions are made on the assumption of correctness. On top of that, data becomes stale very quickly. Roles change, contacts change, organisations shift. Keeping data fresh is a constant battle.
The solution is automation paired with validation.
That means:
- models that validate other models,
- surfacing the exact source when something is updated,
- letting users correct fields easily,
- and using enrichment responsibly so the basics never go stale.
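The steps above — a model validating another model’s output, and routing anything dubious to the user for easy correction — can be sketched roughly like this. The extractor and validator here are deliberately trivial stand-ins (real systems would use models on both sides); the function names and record shape are invented for illustration:

```python
def extract_title(signature: str) -> str:
    """Stand-in for an extraction model: pull a job title from an
    email signature line of the form 'Name, Title'."""
    for line in signature.splitlines():
        _, _, rest = line.partition(",")
        if rest.strip():
            return rest.strip()
    return ""

def validate_title(candidate: str) -> bool:
    """Stand-in for a validation model: cheap sanity checks that a
    second model (or rule set) would apply before the write."""
    return bool(candidate) and len(candidate) < 60 and "@" not in candidate

def safe_update(record: dict, signature: str) -> dict:
    """Only write automated data that passed validation; otherwise
    flag the record for human review instead of storing bad data."""
    candidate = extract_title(signature)
    if validate_title(candidate):
        record["title"] = {"value": candidate, "source": signature}
    else:
        record["needs_review"] = True
    return record

print(safe_update({}, "Best,\nDana Cohen, VP of Operations"))
```

The essential pattern is that validation sits between extraction and the write, so failures degrade into a review queue rather than into silently wrong fields — the “assumed correct when it isn’t” failure mode described above.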
Hallucination, again, comes down to context. Without strong context and a well-built ontology, you never fully solve hallucination drift. Context is the foundation that keeps the system grounded.
Designing for long-term resilience
How does Dreamhub build model resilience?
Yoni: Two principles drive our approach to model resilience: validation and transparency.
First, validation models. They’re essential because they catch drift early, before it becomes systemic. But they’re also the hardest thing to prioritise. Every company, whether a startup or an enterprise, feels resource-constrained, and features usually get all the attention because features sell. Building validation models requires discipline: you deliberately allocate time to something users may never see, but the system absolutely depends on.
Second, transparency. We avoid black-box decisions as much as possible. If the AI automates something or provides a prediction, users should understand why, at least at a high level. Transparency builds trust, and trust drives adoption.
Resilience, to me, means adaptability without chaos. The system should evolve, learn, and improve, but stay anchored in guardrails that prevent drift, hallucination, or opaque behaviour.
Beyond AI
Outside AI, what technology fascinates you right now?
Yoni: Two areas fascinate me right now. The first is quantum computing. Because so many of our models are fundamentally probabilistic, quantum computing has the potential to change how we think about optimisation and decision-making at a deep architectural level. It shifts us into a different computational paradigm, and that could unlock entirely new possibilities in modelling and prediction.
The second is the intersection of AI and climate technology. We’ve been promised ‘smart buildings’ and ‘smart systems’ for years, but the reality hasn’t matched the promise. With AI and LLMs, that can finally change. We can build systems that truly self-optimise, regulating heat, cooling, water, and even complex industrial processes based on real outcomes rather than static rules. And if you extend that into areas like waste reduction or precision agriculture, the impact becomes even more significant.
I believe AI-driven climate technology will be one of the defining transformations of the next decade.
The human side
What’s been your favourite moment as a founder?
Yoni: Founders live with doubt every day. Your vision, decisions, and interpretations of the market are constantly challenged and sometimes dismissed outright. So, when a successful exit finally happens, it becomes a rare and meaningful moment of validation.
Beyond that moment, what stayed with me most was seeing the CB4 team remain close years later. People still travel together. That’s when you realise the impact wasn’t just the product — it was the people, the culture, and the relationships that outlasted the company itself.
A lighter note
If someone wrote a book about your life someday, what would its theme be?
Yoni: I’m a big history fan, which gives you humility about your place in the world. I don’t think I’m important enough to write an autobiography. Churchill once said, “History will be kind to me, for I intend to write it,” but I’m not planning to take that advantage.
If someone else wrote about me, the theme would probably be the obsession with changing things that feel sub-optimised. That drive has shaped everything I’ve done.
