Regain control of your AI portfolio: visibility, cost governance, and lifecycle management

AI investment is accelerating fast, but for most organizations, the biggest blocker to scaling AI isn’t model capability. For CDAOs and Heads of Data and AI, it’s the failure to organize AI investments as a portfolio, with the fundamentals of portfolio management: visibility, accountability, cost governance, and product lifecycle management.
In a recent conversation between DataGalaxy’s Chief Product Officer, Nicolas Averseng, and Alexey Belichenko (Global Head of Data, Analytics & AI at Roche), the discussion grounded these challenges in real enterprise constraints—across global teams, multiple business models, and complex data domains. One theme kept coming back: companies want agentic AI, but many still haven’t solved the basics of data trust and governance. Without those basics, AI initiatives struggle to move from pilots to production—and AI investments turn into a mix of disconnected projects, spreadsheet tracking, unclear ownership, and rising run costs.
Below is a practical, technical breakdown of what it takes to move from “AI experimentation” to a scalable, governed AI portfolio.
AI foundations can’t be optional
One of the strongest observations from the discussion was the pace of change: established vendors are rebranding offerings around AI and context, new vendors are emerging quickly, and “agentic” approaches are now everywhere.
But there’s a reality check behind the hype: many organizations still have unsolved foundational problems, especially around:
- data management practices that aren’t connected to business outcomes
- fragmented governance models
- limited context and lineage
- incomplete digitization of knowledge and processes
- unclear accountability for data and AI assets
This creates a maturity gap: leadership pushes for advanced AI automation, while the core work on data readiness and alignment with business outcomes has not been done.
At Roche, Alexey described the same tension at scale: functions around the world (commercial, marketing, finance operations, etc.) historically developed their own “views” of data and how insights should be produced. Building AI capability in that environment requires standardizing data management practices and re-establishing shared governance—before expecting valuable and measurable outcomes from advanced AI systems.
Key point: AI needs trusted context. And trusted context requires governance basics.
Why most AI portfolios fail before scaling
Watch the full conversation with Alexey Belichenko (Global Head of Data, Analytics & AI at Roche) and DataGalaxy’s CPO, Nicolas Averseng — including the operating model that separates pilots from real, scalable enterprise AI.
The AI portfolio visibility gap
In peer discussions, a surprisingly consistent issue shows up: AI value transparency is missing. This is how many organizations end up with what can be called a ghost portfolio: initiatives that are technically tracked somewhere (a backlog, a deck, a spreadsheet), but are practically invisible where decisions are made.
Organizations often can’t answer simple questions such as:
- Do we have a complete list of AI initiatives (and data initiatives) across the company?
- Which initiatives should be prioritized first—and why?
- Which use cases are in production vs. stuck in “pilot limbo”?
- What data products, assets, or platforms do these initiatives depend on?
- Who owns each initiative (business + IT) and who is accountable for outcomes?
In many cases, AI decisions are still made via spreadsheet-based tracking and ad hoc meetings between business and technology leaders. That makes AI use case prioritization inconsistent, makes it hard to stop low-value AI initiatives, and prevents scaling the few use cases that should go into production.
Key point: You can’t scale what you can’t see.
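The visibility questions above boil down to keeping a queryable inventory rather than a spreadsheet. A minimal sketch of such an inventory, with illustrative (hypothetical) field names, might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Initiative:
    """One AI/data initiative in the portfolio (fields are illustrative)."""
    name: str
    stage: str                 # e.g. "idea", "pilot", "production"
    business_owner: str        # accountable on the business side
    it_owner: str              # accountable on the technology side
    depends_on: list = field(default_factory=list)  # data products / platforms

portfolio = [
    Initiative("Churn prediction", "production", "Sales Ops", "ML Platform",
               depends_on=["customer_360"]),
    Initiative("Contract summarizer", "pilot", "Legal", "GenAI Team",
               depends_on=["contract_repository"]),
]

# Which use cases are in production vs. stuck in pilot?
in_production = [i.name for i in portfolio if i.stage == "production"]
in_pilot = [i.name for i in portfolio if i.stage == "pilot"]

# What data products or platforms do these initiatives depend on?
dependencies = {dep for i in portfolio for dep in i.depends_on}
```

Even a structure this simple can answer the ownership and dependency questions that spreadsheets and ad hoc meetings routinely cannot.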
The 4 “ghost portfolio” traps
When visibility is partial, problems accumulate naturally. Four common patterns show up again and again:
- Duplicates: two teams (sometimes two geographies) build the same AI use case without realizing it.
- Zombies: a product is delivered, adoption is low, but it continues consuming maintenance budget.
- Runaways: a pilot quietly expands for months without a clear success metric or a decision to scale or stop.
- Gaps: a high-priority use case never enters a formal intake process, so it never makes it into any backlog.
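Once an inventory exists, each of the four traps becomes a detectable condition rather than an anecdote. A sketch, with hypothetical thresholds and field names, of how the traps could be flagged:

```python
from collections import Counter

# Each record: (name, team, adoption_score, monthly_run_cost,
#               months_in_pilot, has_success_metric, in_backlog)
# All figures and thresholds below are illustrative assumptions.
initiatives = [
    ("Lead scoring", "EMEA Sales", 0.70, 4000, 0, True,  True),
    ("Lead scoring", "APAC Sales", 0.55, 3500, 0, True,  True),   # duplicate
    ("Doc classifier", "Legal",    0.05, 6000, 0, True,  True),   # zombie
    ("Support copilot", "CX",      0.40, 2000, 9, False, True),   # runaway
    ("Demand forecast", "Supply",  0.00,    0, 0, False, False),  # gap
]

# Duplicates: same use case built by more than one team
name_counts = Counter(name for name, *_ in initiatives)
duplicates = [n for n, c in name_counts.items() if c > 1]

# Zombies: delivered, barely adopted, still consuming run budget
zombies = [n for n, _, adoption, cost, *_ in initiatives
           if adoption < 0.1 and cost > 0]

# Runaways: long-lived pilots with no success metric
runaways = [n for n, _, _, _, months, has_metric, _ in initiatives
            if months > 6 and not has_metric]

# Gaps: initiatives that never entered a formal backlog
gaps = [n for n, *_, in_backlog in initiatives if not in_backlog]
```

The exact thresholds (adoption below 10%, pilots older than six months) are assumptions a governance team would tune; the point is that each trap is mechanically checkable.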
Use cases vs. data products: two portfolios, one view
A nuanced point from the webinar: a mature organization typically manages two connected portfolios:
- Data products / data assets portfolio: what data exists, how it’s governed, how it’s documented, where it’s used, and whether it’s reusable across domains.
- Use cases portfolio: the initiatives delivered with the business (analytics, AI, GenAI), including their value, risks, costs, lifecycle stage, and operational ownership.
These portfolios are not the same thing—but you need both to connect business outcomes to the data and AI capabilities enabling them. In Roche Diagnostics’ commercial organizations, this is especially critical because insight generation spans different go-to-market models across regions (distributor models, key account models)—making data product reuse, alignment, and prioritization harder without a shared portfolio lens.
Connecting AI investment to value
As AI scales, two operational questions become unavoidable: what are we really spending on it, and how do we track its value over time? The two are tightly connected, and both reveal whether an organization treats AI as a project or as an ongoing capability.
FinOps meets portfolio governance
As AI grows, cost becomes a board-level topic fast. One reason is structural: AI costs are often distributed across IT budgets, cloud consumption, platform costs, and business P&Ls.
In the discussion, the cost challenge appeared in multiple forms:
- the data and analytics function can sit inside the CIO or Technology budget
- sometimes AI costs live inside each business line P&L
- organizations struggle to justify data/AI as value-generating rather than a cost center
- many teams have a narrative about top-line or bottom-line impact, but can’t prove it consistently
That’s where portfolio governance becomes essential: it helps connect the dots between:
- what you’re building (initiatives and assets)
- what it costs (build + run)
- what value it creates (KPIs, business outcomes)
- what risks it introduces (trust, compliance, liability)
Key point: if you can’t connect AI investment to AI value, AI becomes an uncontrolled spend category.
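Connecting those dots is, at its simplest, a join between spend and outcomes per initiative. A sketch with illustrative figures (all numbers and field names are assumptions):

```python
# Hypothetical annual build/run costs and estimated value per initiative,
# all in the same currency unit; figures are purely illustrative.
portfolio = {
    "Forecast model": {"build": 120_000, "run": 60_000, "value": 450_000},
    "Chat assistant": {"build": 200_000, "run": 90_000, "value": 0},  # unproven
}

ratios = {}    # initiatives with a measurable value-to-cost ratio
unproven = []  # spend with no demonstrated value -> review candidates
for name, p in portfolio.items():
    total_cost = p["build"] + p["run"]
    if p["value"] > 0:
        ratios[name] = round(p["value"] / total_cost, 2)
    else:
        unproven.append(name)
```

Initiatives that land in `unproven` are exactly the “narrative but no proof” cases the discussion described: real spend, no consistently demonstrated top-line or bottom-line impact.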
From project thinking to product lifecycle
A major operational insight: organizations still think in projects, but AI behaves like a product lifecycle.
Many companies:
- launch dashboards/models/use cases
- assume costs drop sharply after delivery
- move on—without a long-term operating plan
In reality:
- costs accumulate over time (“generational build-up” of legacy)
- modernization cycles arrive (ERP/CRM refresh, platform migrations)
- you discover you can’t fund the refresh because the portfolio is bloated with unmanaged run costs
- assets degrade (accuracy drifts, data changes, ownership disappears)
- adoption drops and teams revert to old habits
Lifecycle discipline means being able to launch, evolve, and sunset data or AI initiatives intentionally—based on value frameworks, not inertia.
Key point: if you can’t sunset, you can’t scale sustainably.
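Lifecycle discipline can be made explicit as a small state machine in which sunsetting is a first-class transition rather than an afterthought. The stages and rules below are an illustrative sketch, not a prescribed model:

```python
# Allowed lifecycle transitions (stage names and rules are illustrative)
TRANSITIONS = {
    "idea":       {"pilot", "rejected"},
    "pilot":      {"production", "sunset"},   # scale or stop, explicitly
    "production": {"evolving", "sunset"},
    "evolving":   {"production", "sunset"},
    "sunset":     set(),                      # terminal: the asset is retired
}

def advance(stage: str, target: str) -> str:
    """Move an asset to `target` only if the transition is allowed."""
    if target not in TRANSITIONS.get(stage, set()):
        raise ValueError(f"illegal transition: {stage} -> {target}")
    return target

stage = advance("pilot", "production")  # a deliberate scale decision
stage = advance(stage, "sunset")        # retirement is an explicit move
```

The useful property is that “pilot limbo” becomes impossible by construction: every initiative must eventually take one of the allowed exits, based on the value framework rather than inertia.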
Accountability models that actually work
On the accountability topic, the webinar emphasized that organizations aren’t homogeneous—maturity varies across business units. Alexey noted that Roche is not homogeneous here either: the diagnostics organization he works with is historically more integrated from a data standpoint than other parts of the business, which shapes how accountability and operating models can be implemented.
But one model described in detail is a structured approach aligned with data mesh principles:
- Data products sit in functions, governed by domain accountability
- Business process ownership + IT process ownership are established
- Dedicated stewards exist on the business side
- Data product teams operate within subject matter areas (finance, marketing, service, etc.)
- Teams run with fixed capacity + backlog management (Kanban-style prioritization)
- Once an asset is mature and published, it becomes available for self-service BI and reuse
- A DataOps model supports ongoing stability and maintenance, aiming to reduce operational “spikes”
This approach helps keep the portfolio stable even while major technology refresh programs (ERP, CRM, etc.) continue in parallel.
Key point: AI accountability is an operating model problem before it is a tooling problem.
When scale forces the conversation
One of the most useful reminders from Roche’s experience is that portfolio governance becomes non-negotiable when initiatives reach “enterprise scale.” In practice, this can mean hundreds of data and AI initiatives spread across teams and geographies—too many to manage through fragmented tracking and informal memory. At that point, a single portfolio view becomes the only realistic way to align priorities, connect initiatives to business KPIs, and make consistent scale/stop decisions.
When to implement portfolio governance
A common question is whether portfolio governance should wait until “later maturity.”
The webinar perspective was pragmatic:
- If there will be real investment and a plan to scale operations, implement portfolio thinking early as a capability.
- If the scope is only technical modernization with no business buy-in, the portfolio may be used more strategically to create alignment and secure investment.
Key point: portfolio governance is foundational when AI becomes strategic.
Scaling AI: the 3 non-negotiables
Organizations are under pressure to deliver more with fewer resources, with expectations of major efficiency gains from AI. Yet many still lack the basics: portfolio transparency, cost/value accountability, and lifecycle management.
If you want AI that scales—and stays trustworthy—the priorities are clear:
- build end-to-end visibility across AI initiatives and data assets
- connect costs, value, and risk in a single AI portfolio view
- establish accountability and governance through federated ownership and stable DataOps

