In 2007, many companies built their mobile strategy around BlackBerry. Not because BlackBerry was bad. It was excellent. It was the dominant platform, the enterprise standard, the thing every serious business person carried. Building for BlackBerry was the rational decision given everything visible at the time.
Three years later, BlackBerry was irrelevant.
The companies that survived the transition well were the ones that had built for the web rather than for a device. The underlying capability, a browser-based interface accessible from any connected hardware, turned out to be durable in a way that any specific platform was not. The lesson wasn't that BlackBerry was a bad bet. The lesson was that betting on a specific platform is always a worse bet than betting on the underlying architecture.
The same thing is happening right now with AI, and most marketing organizations are making the BlackBerry mistake.
How the mistake happens
It starts reasonably enough. A team decides to get serious about AI. They pick a model — ChatGPT, Claude, Gemini — it doesn't much matter which one, because they're all genuinely capable and the differences between them are real but not decisive. They build workflows around it. Prompt libraries. Internal guides. Integrations with their stack. Over six months they get good at using that specific model's strengths and working around its weaknesses.
Then the model they've built around gets updated and behaves differently. Or a competitor releases something significantly better for their specific use case. Or the pricing changes. Or the organization's needs evolve and the model they chose is the wrong fit for where they're going.
Now they have a choice between staying with a tool that's no longer the best option and rebuilding workflows that have become deeply embedded in how their team operates. Neither path is good. Both were avoidable.
The organization didn't make a bad decision when they picked their model. They made a bad architectural decision when they built their capability around the model rather than around the task.
What building for agentic actually means
Agentic architecture separates the orchestration layer from the model layer. The orchestration layer is where you define what needs to happen: the goals, the steps, the tools available, the rules for decision-making, the conditions for human review. The model layer is what executes the reasoning within each step. When these are properly separated, swapping the model is a configuration change, not a rebuild.
Think about it in marketing terms. You have a workflow that monitors campaign performance, identifies underperforming segments, generates hypotheses for why they're underperforming, drafts optimization recommendations, and routes them to the right person for review. That workflow has a structure that is independent of which language model is doing the reasoning at each step. The structure is yours. The model is interchangeable.
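To make the separation concrete, here is a minimal sketch of that idea in Python. The step names, prompt templates, and the single-function model interface are illustrative assumptions, not a real provider API; the point is that the workflow structure is defined once, and the model doing the reasoning is passed in as a parameter.

```python
from dataclasses import dataclass
from typing import Callable

# The "model layer": any callable that takes a prompt and returns text.
# A real implementation would wrap a provider SDK behind this signature.
ModelFn = Callable[[str], str]

@dataclass
class Step:
    name: str
    prompt_template: str

# The "orchestration layer": the structure of the work, owned by you.
# These steps mirror the campaign-monitoring workflow described above.
WORKFLOW = [
    Step("identify", "Which segments are underperforming in: {data}"),
    Step("hypothesize", "Generate hypotheses for why these underperform: {data}"),
    Step("recommend", "Draft optimization recommendations based on: {data}"),
]

def run_workflow(data: str, model: ModelFn) -> dict:
    """Run each step with whichever model is supplied; each step feeds the next."""
    results = {}
    for step in WORKFLOW:
        results[step.name] = model(step.prompt_template.format(data=data))
        data = results[step.name]
    return results

# Swapping the model is a configuration change, not a rebuild.
def stub_model(prompt: str) -> str:
    return f"[stub response to: {prompt[:30]}...]"

results = run_workflow("Q3 campaign metrics", stub_model)
```

Replacing `stub_model` with a wrapper around any provider's API changes nothing about `WORKFLOW` or `run_workflow`. That is the whole argument in twenty lines.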
"Building for agentic means designing the structure first. What is the goal? What data does the system need to reach? What actions can it take? Those questions are architectural. They don't depend on which model you're using."
Organizations that have answered those questions are genuinely model-agnostic. They can run the best available model for each task, swap in something better when it comes along, and run different models for different parts of the same workflow if that's what produces the best result. The capability lives in the architecture. The model is just the engine.
The compounding cost of model lock-in
The model landscape is not stable. The gap between the best available model today and what will exist in 18 months is significant, and the pace of improvement shows no sign of slowing. Organizations that are locked into a specific model are not just missing out on better performance. They're accumulating technical debt every time a better option exists that they can't switch to without rebuilding.
There's also a cost to specialization. When a team gets very good at prompting one specific model, that skill is partially non-transferable. The intuitions they've built about how the model thinks, what it's good at, how to get it to do what they want — these are useful but they don't fully translate. Teams that have built prompting expertise instead of architectural expertise will find themselves in the same position as developers who knew BlackBerry's proprietary APIs inside and out. The expertise is real. The platform is gone.
The deeper cost is strategic. If your AI capability is defined by access to a specific model, your competitive advantage is only as durable as that model's lead. Someone else has the same model. The moment a competitor can access the same capability, your advantage disappears. Architectural capability is harder to replicate because it lives in how your workflows are designed, how your data is connected, how your team operates around the system. That's organizational knowledge. It compounds over time and it doesn't disappear when a new model comes out.
What this means for marketing teams right now
The practical implication is to invest in the layer below the model. That means a few things.
First, document your workflows at the goal and step level before you automate them. What is this workflow trying to accomplish? What information does it need? What does it do with that information? What output does it produce and how do you know if the output is good? If you can answer those questions clearly, you can implement the workflow with any capable model and migrate it when something better comes along.
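Those four questions can be captured as a simple structured spec before any automation exists. The field names and the example workflow below are illustrative assumptions, one possible shape for this kind of documentation rather than a standard format:

```python
from dataclasses import dataclass

@dataclass
class WorkflowSpec:
    goal: str                # what the workflow is trying to accomplish
    inputs: list[str]        # what information it needs
    steps: list[str]         # what it does with that information
    output: str              # what it produces
    quality_check: str       # how you know the output is good

# A hypothetical marketing workflow documented at the goal/step level.
weekly_segment_review = WorkflowSpec(
    goal="Flag underperforming ad segments each week",
    inputs=["campaign metrics export", "segment definitions"],
    steps=[
        "compare each segment's CTR to its trailing baseline",
        "rank segments by size of deviation",
        "draft a summary of the worst performers",
    ],
    output="ranked list of segments with draft commentary",
    quality_check="a channel owner reviews the draft before anything ships",
)
```

A spec like this is model-agnostic by construction: any capable model can execute the steps, and migrating to a better one later changes the implementation, not the document.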
Second, build your integrations to be model-agnostic. If your marketing data pipeline connects directly to a specific AI provider's API with no abstraction layer, every provider change is a pipeline rebuild. An orchestration layer that sits between your data and your models means the data connections stay stable while the models can change.
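The abstraction layer can be as simple as a small interface that the pipeline depends on, with one thin adapter per provider. The class and method names below are illustrative assumptions, not real SDK calls; in practice each adapter would wrap the actual provider client behind the same interface.

```python
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """The interface the pipeline depends on. Providers change; this doesn't."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ProviderA(ModelProvider):
    def complete(self, prompt: str) -> str:
        # In reality: call provider A's SDK here.
        return "response from A"

class ProviderB(ModelProvider):
    def complete(self, prompt: str) -> str:
        # In reality: call provider B's SDK here.
        return "response from B"

# The pipeline code never imports a provider SDK directly.
def summarize_campaign(metrics: str, provider: ModelProvider) -> str:
    return provider.complete(f"Summarize this campaign data: {metrics}")

# Switching providers is one line at the call site, not a pipeline rebuild.
summary = summarize_campaign("CTR up 3% week over week", ProviderA())
```

This is a standard adapter pattern, nothing exotic. Its value here is that the data connections and pipeline logic stay stable while the provider behind `ModelProvider` can change at any time.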
Third, develop your team's capability at the architectural level. That means understanding how to design an agentic workflow, how to define goals and constraints, how to build in verification steps, and how to audit what the system is doing. This is more durable than understanding how to prompt one specific model well. It's also more transferable across the team, because it's about the structure of the work rather than the quirks of a particular tool.
The bet worth making
The bet that ages well is not on a specific model. It's on autonomous systems as the operating architecture for marketing, and on building the organizational capability to design, direct, and improve those systems over time.
Models will keep getting better. The ones available in two years will make today's best models look limited. Organizations that have built on top of a flexible architecture will be able to take advantage of that improvement continuously, each new model making their existing workflows more capable without requiring them to redesign anything fundamental.
Organizations that have built around a specific model will face a recurring choice: stay with something that's no longer the best option, or pay the rebuild cost to migrate. They'll face that choice every 12 to 18 months, indefinitely, because the model landscape isn't going to stabilize.
The BlackBerry companies didn't lose because they made a bad decision in 2007. They lost because they made a platform-specific decision when an architecture-level decision was available. The same choice is available right now. Most marketing teams are making the platform decision without realizing there's another option.
The organizations that will lead in five years aren't the ones that picked the right model in 2025. They're the ones that stopped thinking about models as the capability and started thinking about architecture as the capability. That's a different investment and a different kind of organizational knowledge. It's also a much more durable one.