About the author: Amin Mrabet, Tech Team

AI investment is accelerating fast, and the ecosystem is reshaping itself in real time. But for most organizations, the biggest blocker to scaling AI isn’t model capability—it’s the lack of portfolio fundamentals: visibility, accountability, cost governance, and lifecycle management.
In a recent webinar conversation between DataGalaxy’s Chief Product Officer Nicolas Averseng and Alexey Belichenko (Global Head of Data, Analytics & AI at Roche), the discussion grounded these challenges in real enterprise constraints—across global teams, multiple business models, and complex data estates. One theme kept coming back: companies want agentic AI, but many still haven’t solved the basics of data management and governance. Without those basics, AI initiatives struggle to move from pilots to production—and portfolios turn into a mix of disconnected projects, spreadsheet tracking, unclear ownership, and rising run costs.
Below is a practical, technical breakdown of what it takes to move from “AI experimentation” to a scalable, governed AI capability.
One of the strongest observations from the discussion was the pace of change: established vendors are rebranding offerings around AI and context, new vendors are emerging quickly, and “agentic” approaches are now everywhere.
But there’s a reality check behind the hype: many organizations still have unsolved foundational problems, especially around:
- data management and governance basics
- digitized, trustworthy knowledge bases for AI systems to consume
- clear data ownership and shared standards across teams
This creates a maturity gap: leadership pushes for advanced AI automation, while core data readiness work is underfunded. In the webinar, one example captured this perfectly: an organization was asked to deliver agentic automation, but the knowledge base wasn’t digitized—meaning the agent would have nothing reliable to consume.
At Roche, Alexey described the same tension at scale: affiliates and functions around the world (commercial, marketing, finance operations, service organizations) historically developed their own “views” of data and how insights should be produced. Building AI capability in that environment requires standardizing data management practices and re-establishing shared governance—before expecting consistent outcomes from advanced AI systems.
Key point: AI needs trusted context. And trusted context requires governance basics.
Watch the full conversation with Alexey Belichenko (Global Head of Data, Analytics & AI at Roche) and DataGalaxy’s CPO Nicolas Averseng — including the operating models, FinOps practices, and lifecycle decisions that separate pilots from real enterprise scale.
In peer discussions, a surprisingly consistent issue shows up: portfolio transparency is missing. This is how many organizations end up with what can be called a ghost portfolio: initiatives that are technically tracked somewhere (a backlog, a deck, a spreadsheet), but are practically invisible where decisions are made.
Organizations often can’t answer simple questions such as:
- How many AI initiatives are currently running, and where?
- Who owns each one?
- What does each cost to build and to run?
- Which are delivering measurable value, and which should be stopped or scaled?
In many cases, decisions are still made via spreadsheet-based tracking and ad hoc meetings between business and technology leaders. That makes prioritization inconsistent, makes it hard to stop low-value initiatives, and prevents scaling the few use cases that should go into production.
Key point: you can’t scale what you can’t see.
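To make the "simple questions" concrete, here is a minimal sketch of what a single portfolio registry could look like in place of spreadsheet tracking. All initiative names, fields, and figures are illustrative assumptions, not the webinar's model:

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    owner: str               # accountable person or team ("" = unowned)
    status: str              # e.g. "pilot", "production", "sunset"
    annual_run_cost: float   # estimated yearly run cost
    business_kpi: str        # the business outcome it is tied to

# One registry as the system of record, instead of decks and spreadsheets.
registry = [
    Initiative("invoice-triage-agent", "Finance Ops", "pilot", 40_000, "cycle time"),
    Initiative("field-insights-copilot", "Commercial", "production", 120_000, "rep productivity"),
    Initiative("legacy-churn-model", "", "pilot", 25_000, ""),  # no owner: a "ghost" initiative
]

# The unanswerable questions become one-line queries:
unowned = [i.name for i in registry if not i.owner]
total_run_cost = sum(i.annual_run_cost for i in registry)
in_production = [i.name for i in registry if i.status == "production"]
```

The point is not the tooling (a data catalog or portfolio platform would hold this in practice) but that visibility requires a single structured record that decision-making meetings actually query.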
When visibility is partial, problems accumulate naturally. Four common patterns show up again and again:
- disconnected projects with no shared portfolio view
- spreadsheet tracking as the de facto system of record
- unclear ownership and accountability
- rising run costs with no value check
A nuanced point from the webinar: a mature organization typically manages two connected portfolios, a portfolio of business initiatives and outcomes, and a portfolio of the data and AI capabilities that enable them.
These portfolios are not the same thing—but you need both to connect business outcomes to the data and AI capabilities enabling them. In Roche Diagnostics commercial organizations, this is especially critical because insight generation spans different go-to-market models across regions (distributor models, key account models) and a broad diagnostics product portfolio—making reuse, alignment, and prioritization harder without a shared portfolio lens.
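A rough sketch of how the two portfolios can be joined so that reuse and gaps become visible. The initiative and capability names are hypothetical placeholders, not anything described at Roche:

```python
# Business portfolio on one side, data/AI capability portfolio on the other,
# joined by explicit references from each initiative to the capabilities it uses.
business_portfolio = {
    "distributor-sales-insights": {"kpi": "revenue per distributor",
                                   "uses": ["sales-data-product"]},
    "key-account-forecasting": {"kpi": "forecast accuracy",
                                "uses": ["sales-data-product", "forecast-model"]},
}
capability_portfolio = {"sales-data-product", "forecast-model", "orphan-pipeline"}

# Count how many initiatives each capability serves; capabilities serving
# nothing are candidates for consolidation or sunset.
usage: dict[str, int] = {}
for init in business_portfolio.values():
    for cap in init["uses"]:
        usage[cap] = usage.get(cap, 0) + 1
orphans = capability_portfolio - set(usage)
```

With this link in place, a shared capability reused across go-to-market models shows up as high usage, while unreferenced capabilities surface as orphans, which is exactly the "shared portfolio lens" the article argues for.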
As AI scales, two operational questions become unavoidable: what are we really spending on it, and how do we manage it over time? Both are tightly connected — and both reveal whether an organization treats AI as a project or as an ongoing capability.
As AI grows, cost becomes a board-level topic fast. One reason is structural: AI costs are often distributed across IT budgets, cloud consumption, platform costs, and business P&Ls.
In the discussion, the cost challenge appeared in multiple forms: spend fragmented across IT budgets, cloud consumption, platform costs, and business P&Ls; run costs that keep rising after delivery; and difficulty attributing spend to specific initiatives and outcomes.
That’s where portfolio governance becomes essential: it helps connect the dots between what each initiative costs to build and run, and the business value it delivers.
Key point: if you can’t connect investment to value, AI becomes an uncontrolled spend category.
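A minimal FinOps-style sketch of "connecting investment to value": tag each cost record with its initiative, roll spend up, and compare it to an estimated value. The cost sources mirror the budget lines named above; all figures and initiative names are invented for illustration:

```python
from collections import defaultdict

# AI spend scattered across budget lines, each record tagged with an initiative.
cost_records = [
    {"initiative": "invoice-triage-agent", "source": "cloud", "amount": 18_000},
    {"initiative": "invoice-triage-agent", "source": "platform", "amount": 6_000},
    {"initiative": "field-insights-copilot", "source": "business-pnl", "amount": 50_000},
    {"initiative": "field-insights-copilot", "source": "cloud", "amount": 70_000},
]

# Business-owner value estimates, however rough; the discipline is having them at all.
estimated_value = {
    "invoice-triage-agent": 90_000,
    "field-insights-copilot": 60_000,
}

# Roll up total spend per initiative across all sources.
spend = defaultdict(float)
for rec in cost_records:
    spend[rec["initiative"]] += rec["amount"]

# Value-to-spend ratio turns an "uncontrolled spend category" into a scale/stop signal.
ratios = {name: estimated_value[name] / total for name, total in spend.items()}
```

A ratio well above 1 argues for scaling; a ratio well below 1 puts the initiative on the sunset list discussed next, rather than letting it run on inertia.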
A major operational insight: organizations still think in projects, but AI behaves like a product lifecycle.
Many companies fund AI initiatives as one-off projects: build, deliver, hand over, and move on. In reality, an AI initiative keeps generating run costs after delivery, needs continuous evaluation against the value it was supposed to create, and must eventually be evolved or retired.
Lifecycle discipline means being able to launch, evolve, and sunset initiatives intentionally—based on value frameworks, not inertia.
Key point: if you can’t sunset, you can’t scale sustainably.
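The lifecycle discipline described above can be sketched as a small state machine in which "sunset" is a first-class, intentional transition rather than something that happens by neglect. The states and allowed moves are an illustrative assumption, not a standard:

```python
# Allowed lifecycle transitions for an AI initiative. An initiative in
# production can evolve in place or be retired; nothing leaves "sunset".
ALLOWED = {
    "idea": {"pilot", "sunset"},
    "pilot": {"production", "sunset"},
    "production": {"production", "sunset"},  # evolve in place, or retire
    "sunset": set(),
}

def transition(state: str, target: str) -> str:
    """Move an initiative to a new lifecycle state, or fail loudly."""
    if target not in ALLOWED[state]:
        raise ValueError(f"cannot move from {state} to {target}")
    return target
```

Making illegal moves fail loudly is the code-level analogue of the article's point: launching, evolving, and sunsetting should be explicit decisions driven by a value framework, not defaults driven by inertia.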
On the accountability topic, the webinar emphasized that organizations aren’t homogeneous—maturity varies across business units. Alexey noted that Roche is not homogeneous here either: the diagnostics organization he works with is historically more integrated from a data standpoint than other parts of the business, which shapes how accountability and operating models can be implemented.
But one model described in detail is a structured approach aligned with data mesh principles.
This approach helps keep the portfolio stable even while major technology refresh programs (ERP, CRM, etc.) continue in parallel.
Key point: AI accountability is an operating model problem before it is a tooling problem.
One of the most useful reminders from Roche’s experience is that portfolio governance becomes non-negotiable when initiatives reach “enterprise scale.” In practice, this can mean hundreds of data and AI initiatives spread across teams and geographies—too many to manage through fragmented tracking and informal memory. At that point, a single portfolio view becomes the only realistic way to align priorities, connect initiatives to business KPIs, and make consistent scale/stop decisions.
A common question is whether portfolio governance should wait until “later maturity.”
The webinar perspective was pragmatic: don’t treat portfolio governance as a late-maturity add-on; put it in place as soon as AI becomes strategic to the business.
Key point: portfolio governance is foundational when AI becomes strategic.
Organizations are under pressure to deliver more with fewer resources, with expectations of major efficiency gains from AI. Yet many still lack the basics: portfolio transparency, cost/value accountability, and lifecycle management.
If you want AI that scales and stays trustworthy, the priorities are clear: portfolio transparency, cost and value accountability, disciplined lifecycle management, and the data governance basics that give AI trusted context.