Regain control of your AI portfolio: visibility, cost governance, and lifecycle management

22 April 2026 │ 14 min read │ AI │ By Amin Mrabet, Tech Team

    AI investment is accelerating fast, and the ecosystem is reshaping itself in real time. But for most organizations, the biggest blocker to scaling AI isn’t model capability—it’s the lack of portfolio fundamentals: visibility, accountability, cost governance, and lifecycle management.

    In a recent webinar conversation between DataGalaxy’s Chief Product Officer Nicolas Averseng and Alexey Belichenko (Global Head of Data, Analytics & AI at Roche), the discussion grounded these challenges in real enterprise constraints—across global teams, multiple business models, and complex data estates. One theme kept coming back: companies want agentic AI, but many still haven’t solved the basics of data management and governance. Without those basics, AI initiatives struggle to move from pilots to production—and portfolios turn into a mix of disconnected projects, spreadsheet tracking, unclear ownership, and rising run costs.

    Below is a practical, technical breakdown of what it takes to move from “AI experimentation” to a scalable, governed AI capability.

    The AI ecosystem is moving fast—your foundations can’t be optional

    One of the strongest observations from the discussion was the pace of change: established vendors are rebranding offerings around AI and context, new vendors are emerging quickly, and “agentic” approaches are now everywhere.

    But there’s a reality check behind the hype: many organizations still have unsolved foundational problems, especially around:

    • data management practices that differ across teams and geographies
    • fragmented governance models
    • limited context and lineage
    • incomplete digitization of knowledge and processes
    • unclear accountability for data and AI assets

    This creates a maturity gap: leadership pushes for advanced AI automation, while core data readiness work is underfunded. In the webinar, one example captured this perfectly: an organization was asked to deliver agentic automation, but the knowledge base wasn’t digitized—meaning the agent would have nothing reliable to consume.

    At Roche, Alexey described the same tension at scale: affiliates and functions around the world (commercial, marketing, finance operations, service organizations) historically developed their own “views” of data and how insights should be produced. Building AI capability in that environment requires standardizing data management practices and re-establishing shared governance—before expecting consistent outcomes from advanced AI systems.

    Key point: AI needs trusted context. And trusted context requires governance basics.

    Why most AI portfolios fail before they even scale — Roche’s Head of Data, Analytics & AI explains

    Watch the full conversation with Alexey Belichenko (Global Head of Data, Analytics & AI at Roche) and DataGalaxy’s CPO Nicolas Averseng — including the operating models, FinOps practices, and lifecycle decisions that separate pilots from real enterprise scale.

    Webinar Replay — DataGalaxy

    Portfolio visibility: most organizations still don’t know what they have

    In peer discussions, a surprisingly consistent issue shows up: portfolio transparency is missing. This is how many organizations end up with what can be called a ghost portfolio: initiatives that are technically tracked somewhere (a backlog, a deck, a spreadsheet), but are practically invisible where decisions are made.

    Organizations often can’t answer simple questions such as:

    • Do we have a complete list of AI initiatives (and data initiatives) across the company?
    • Which initiatives should be prioritized first—and why?
    • Which use cases are in production vs. stuck in “pilot limbo”?
    • What data products, assets, or platforms do these initiatives depend on?
    • Who owns each initiative (business + IT) and who is accountable for outcomes?

    In many cases, decisions are still made via spreadsheet-based tracking and ad hoc meetings between business and technology leaders. That makes prioritization inconsistent, makes it hard to stop low-value initiatives, and prevents scaling the few use cases that should go into production.

    Key point: you can’t scale what you can’t see.

    Four “ghost portfolio” failure modes to watch for

    When visibility is partial, problems accumulate naturally. Four common patterns show up again and again:

    • Duplicates: two teams (sometimes two geographies) build the same AI use case without realizing it.
    • Zombies: a product is delivered, adoption is low, but it continues consuming maintenance budget.
    • Runaways: a pilot quietly expands for months without a clear success metric or a decision to scale or stop.
    • Gaps: a high-priority use case never enters a formal intake process, so it never makes it into any backlog.

    Use case portfolio vs. data product portfolio: treat them as different (but connected)

    A nuanced point from the webinar: a mature organization typically manages two connected portfolios:

    1. Data products / data assets portfolio: what data exists, how it’s governed, how it’s documented, where it’s used, and whether it’s reusable across domains.
    2. Use cases portfolio: the initiatives delivered with the business (analytics, AI, GenAI), including their value, risks, costs, lifecycle stage, and operational ownership.

    These portfolios are not the same thing—but you need both to connect business outcomes to the data and AI capabilities enabling them. In Roche Diagnostics commercial organizations, this is especially critical because insight generation spans different go-to-market models across regions (distributor models, key account models) and a broad diagnostics product portfolio—making reuse, alignment, and prioritization harder without a shared portfolio lens.
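
A minimal sketch of what "connected but distinct" can look like in practice: each use case declares the data products it depends on, which makes reuse and change impact queryable. All names and domains below are invented for illustration.

```python
# Illustrative linkage between the two portfolios. Each use case declares
# its data-product dependencies; the product catalog records ownership.
use_cases = {
    "churn-prediction":   {"depends_on": ["customer-360", "orders"], "stage": "production"},
    "demand-forecasting": {"depends_on": ["orders", "inventory"],    "stage": "pilot"},
}

data_products = {
    "customer-360": "CRM domain",
    "orders":       "Sales domain",
    "inventory":    "Supply domain",
}

def reuse_count(product: str) -> int:
    """How many use cases consume a given data product?"""
    return sum(product in uc["depends_on"] for uc in use_cases.values())

def impacted_use_cases(product: str) -> list[str]:
    """Which use cases are affected if this data product changes?"""
    return [name for name, uc in use_cases.items() if product in uc["depends_on"]]
```

With this linkage in place, a prioritization discussion can answer both "which use case creates value" and "which shared asset makes several use cases possible" from the same view.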

    Cost governance and lifecycle management: connecting investment to value

    As AI scales, two operational questions become unavoidable: what are we really spending on it, and how do we manage it over time? Both are tightly connected — and both reveal whether an organization treats AI as a project or as an ongoing capability.

    FinOps + portfolio: the hardest conversation becomes unavoidable

    As AI grows, cost becomes a board-level topic fast. One reason is structural: AI costs are often distributed across IT budgets, cloud consumption, platform costs, and business P&Ls.

    In the discussion, the cost challenge appeared in multiple forms:

    • data and analytics sometimes sit inside CIO portfolios (technology spend)
    • sometimes costs live inside business P&Ls
    • organizations struggle to justify data/AI as value-generating rather than a cost center
    • many teams have a narrative about top-line or bottom-line impact, but can’t prove it consistently

    That’s where portfolio governance becomes essential: it helps connect the dots between:

    • what you’re building (initiatives and assets)
    • what it costs (build + run)
    • what value it creates (KPIs, business outcomes)
    • what risks it introduces (trust, compliance, liability)
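
Connecting those dots can start very simply: one record per initiative joining build cost, run cost, and an estimated value figure, then ranking on the ratio. The fields and numbers below are invented for illustration; real FinOps allocation is harder, but even a crude shared view beats costs scattered across IT budgets and business P&Ls.

```python
# Hedged sketch: one row per initiative; all figures are fictional.
portfolio = [
    {"name": "churn-prediction", "build_cost": 250_000, "annual_run_cost": 60_000,
     "annual_value_estimate": 900_000, "risk": "low"},
    {"name": "doc-summarizer",   "build_cost": 120_000, "annual_run_cost": 80_000,
     "annual_value_estimate": 50_000,  "risk": "medium"},
]

def value_to_cost(item: dict) -> float:
    """Estimated annual value per unit of total (build + run) cost."""
    total_cost = item["build_cost"] + item["annual_run_cost"]
    return item["annual_value_estimate"] / total_cost

# Rank initiatives so scale/stop conversations start from the same numbers.
ranked = sorted(portfolio, key=value_to_cost, reverse=True)
```

A ratio below 1 does not automatically mean "stop" (risk and strategic fit matter), but it forces the value conversation that spreadsheet-era portfolios tend to avoid.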

    Key point: if you can’t connect investment to value, AI becomes an uncontrolled spend category.

    Lifecycle management: the maturity shift most organizations still haven’t made

    A major operational insight: organizations still think in projects, but AI behaves like a product lifecycle.

    Many companies:

    • launch dashboards/models/use cases
    • assume costs drop sharply after delivery
    • move on—without a long-term operating plan

    In reality:

    • costs accumulate over time (“generational build-up” of legacy)
    • modernization cycles arrive (ERP/CRM refresh, platform migrations)
    • you discover you can’t fund the refresh because the portfolio is bloated with unmanaged run costs
    • assets degrade (accuracy drifts, data changes, ownership disappears)
    • adoption drops and teams revert to old habits

    Lifecycle discipline means being able to launch, evolve, and sunset initiatives intentionally—based on value frameworks, not inertia.
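
One way to make that discipline concrete is to treat lifecycle stages as an explicit state machine, where "sunset" is a first-class transition rather than an afterthought. The stages and allowed moves below are an illustrative convention, not a standard.

```python
# Minimal lifecycle state machine for a portfolio item.
# "sunset" is terminal: run costs are released, ownership is closed out.
ALLOWED = {
    "intake":     {"pilot", "rejected"},
    "pilot":      {"production", "sunset"},  # a pilot must be scaled or stopped
    "production": {"evolve", "sunset"},
    "evolve":     {"production", "sunset"},
    "sunset":     set(),
    "rejected":   set(),
}

def transition(stage: str, target: str) -> str:
    """Move an initiative to a new lifecycle stage, or fail loudly."""
    if target not in ALLOWED.get(stage, set()):
        raise ValueError(f"illegal lifecycle move: {stage} -> {target}")
    return target
```

Encoding the rules this way means an initiative cannot quietly drift: every stage change is a recorded decision, and "pilot forever" simply is not a reachable state.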

    Key point: if you can’t sunset, you can’t scale sustainably.

    Accountability models that work: federated ownership + DataOps stability

    On the accountability topic, the webinar emphasized that organizations aren’t homogeneous—maturity varies across business units. Alexey noted that Roche is not homogeneous here either: the diagnostics organization he works with is historically more integrated from a data standpoint than other parts of the business, which shapes how accountability and operating models can be implemented.

    But one model described in detail is a structured approach aligned with data mesh principles:

    • Data products sit in functions, governed by domain accountability
    • Business process ownership + IT process ownership are established
    • Dedicated stewards exist on the business side
    • Data product teams operate within subject matter areas (finance, marketing, service, etc.)
    • Teams run with fixed capacity + backlog management (Kanban-style prioritization)
    • Once an asset is mature and published, it becomes available for self-service BI and reuse
    • A DataOps model supports ongoing stability and maintenance, aiming to reduce operational “spikes”

    This approach helps keep the portfolio stable even while major technology refresh programs (ERP, CRM, etc.) continue in parallel.

    Key point: AI accountability is an operating model problem before it is a tooling problem.

    A concrete signal of scale: “hundreds of initiatives”

    One of the most useful reminders from Roche’s experience is that portfolio governance becomes non-negotiable when initiatives reach “enterprise scale.” In practice, this can mean hundreds of data and AI initiatives spread across teams and geographies—too many to manage through fragmented tracking and informal memory. At that point, a single portfolio view becomes the only realistic way to align priorities, connect initiatives to business KPIs, and make consistent scale/stop decisions.

    When should you implement portfolio governance? Early—if you intend to scale

    A common question is whether portfolio governance should wait until “later maturity.”

    The webinar perspective was pragmatic:

    • If there will be real investment and a plan to scale operations, implement portfolio thinking early as a capability.
    • If the scope is only technical modernization with no business buy-in, the portfolio may be used more strategically to create alignment and secure investment.

    Key point: portfolio governance is foundational when AI becomes strategic.

    Conclusion: scaling AI requires visibility, cost control, and lifecycle discipline

    Organizations are under pressure to deliver more with fewer resources, with expectations of major efficiency gains from AI. Yet many still lack the basics: portfolio transparency, cost/value accountability, and lifecycle management.

    If you want AI that scales—and stays trustworthy—the priorities are clear:

    • build end-to-end visibility across initiatives and assets
    • connect costs, value, and risk in a single portfolio view
    • implement lifecycle governance (including sunsetting)
    • establish accountability through federated ownership and stable DataOps