Enterprise artificial intelligence is growing more complicated, and platform engineering is becoming the control layer that keeps it moving.
As AI moves into production, the challenge is no longer just choosing models — it is getting data, applications, virtual machines, containers and inference workloads to operate across messy hybrid environments. That is where Red Hat Inc.’s open hybrid cloud strategy is becoming more relevant, giving enterprises a way to turn scattered infrastructure into a more consistent operating layer for AI, according to Paul Nashawaty, principal analyst at theCUBE Research.
“Red Hat Summit 2026 reflects the ongoing shift in application development toward platform engineering and AI-enabled workflows,” Nashawaty said. “Enterprises are standardizing on Kubernetes-based platforms to support consistent application delivery across hybrid environments, while integrating AI capabilities into development pipelines. This is increasing alignment between development and infrastructure teams, as applications, data and models are built and operated on shared platforms rather than through isolated toolchains.”
As part of SiliconANGLE Media’s ongoing look at the infrastructure choices shaping enterprise AI, this analysis examines how shared platforms, virtualization and open hybrid cloud control are changing the market. Don’t miss SiliconANGLE’s interviews and analysis from Red Hat Summit 2026, which runs May 11-13, for more on where Red Hat is taking the market next. (* Disclosure below.)
Platform engineering becomes the control point for AI

Red Hat’s AI strategy is built around the idea that enterprises need one foundation for traditional applications, virtual machines, containers and AI workloads. That makes platform engineering a practical requirement rather than a technology trend, because production AI depends on the systems around the model as much as the model itself, explained Rob Strechay, principal analyst at theCUBE Research.
“AI is no longer a science project,” he said. “At Red Hat Summit, the conversation shifts to how organizations operationalize AI efficiently, govern it responsibly and actually deliver return on AI. The real bottleneck in AI isn’t models; it’s the platform. Red Hat’s ‘metal-to-agents’ vision shows how enterprises can run AI, traditional apps and agentic systems on a unified foundation.”
That framing gives Red Hat a clear market lane. Many companies are not struggling because they lack AI ambition; they are struggling because their teams use fragmented tools, disconnected processes and infrastructure that was not designed for inference-heavy, distributed workloads. Kubernetes is becoming the coordination layer for those environments, but enterprises still need platforms that make it usable at scale, Strechay noted.
“Token economics is becoming the new cloud cost model, but it is super confusing to CFOs and line of business leaders,” he said. “The organizations that win will be the ones that optimize inference, not just build bigger models. I expect Red Hat to talk about making all of the infrastructure for this transparent and simple.”
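To make that arithmetic concrete, here is a minimal sketch of how token-based pricing compounds in production. The per-token prices, request volume and token counts below are hypothetical placeholders, not Red Hat or vendor figures; the point is that fractions of a cent per request become a real line item at enterprise scale.

```python
# Minimal sketch of token economics for inference workloads.
# All prices and volumes are hypothetical placeholders for illustration.

PRICE_PER_1K_INPUT = 0.0005   # assumed $ per 1,000 input (prompt) tokens
PRICE_PER_1K_OUTPUT = 0.0015  # assumed $ per 1,000 output (completion) tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one inference request under the assumed prices."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# A modest internal workload: 50,000 requests a day,
# averaging 1,200 input and 400 output tokens per request.
daily = 50_000 * request_cost(1_200, 400)
print(f"daily: ${daily:,.2f}  annual: ${daily * 365:,.2f}")
# daily: $60.00  annual: $21,900.00
```

Model choice, prompt length and caching all move these numbers, which is why optimizing inference, not building bigger models, becomes the lever Strechay describes.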
Red Hat’s position in this market is rooted in its history with Linux, Kubernetes and open-source infrastructure. That gives the company a credible foundation for arguing that AI needs a control plane that can span environments, rather than a stack designed around one cloud, one accelerator or one deployment model, according to Brian Stevens, senior vice president and chief technology officer for AI at Red Hat.
“What we realized is that AI is being developed by data scientists, and, as part of that, they’re building their own infrastructure to run it on,” he recently told theCUBE. “If you think about the way companies did digital transformation, they very quickly realized every development team can’t build their own cloud.”
Virtualization turns into modernization

Virtualization gives Red Hat another opening. Many enterprises are reassessing legacy VM environments, but they are not looking for disruption for its own sake. They need a migration path that supports existing applications while giving them a way to modernize infrastructure for containers, AI workloads and cloud-native operations, explained Daniel Messer, senior manager for product management at Red Hat.
“It’s a very powerful concept to reapply that — and also consolidate it on the same platform with less vendors and less attack surface for changes in different learning curves, which I think has been the power of Kubernetes all along,” Messer said. “We think virtualization and containers should not live in siloes. They should be on one platform — and KubeVirt makes that happen.”
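Messer’s one-platform point is easiest to see in how KubeVirt represents a VM: it is just another Kubernetes object, created through the same API as a pod or deployment. The following is a hedged sketch using the Kubernetes Python client, assuming a cluster that already has KubeVirt installed; the VM name and demo disk image are illustrative.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

# A KubeVirt VirtualMachine is a custom resource, so it is declared
# and submitted exactly like any other Kubernetes object.
vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm"},  # illustrative name
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"},
                }],
            }
        },
    },
}

api.create_namespaced_custom_object(
    group="kubevirt.io", version="v1",
    namespace="default", plural="virtualmachines", body=vm,
)
```

Because the VM is a first-class Kubernetes resource, the same RBAC, GitOps and observability tooling that governs containers applies to it unchanged, which is the consolidation Messer is describing.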
This is where Red Hat’s competitive posture becomes clearest. VMware remains deeply embedded in enterprise virtualization, while hyperscalers continue to court modernization workloads. Red Hat’s answer is to pull VMs, containers and AI-based systems into a single platform, making OpenShift Virtualization part of a broader application modernization story rather than a narrow VM replacement.
“But that conversation is not just a virtualization migration conversation,” said Ashesh Badani, senior vice president and chief product officer at Red Hat. “Increasingly, that’s a modernization conversation because you want to make sure the platform that you’ve got is getting those VMs in, making sure that you’re supporting those cloud-native workloads, but increasingly is also making it a platform for AI-based workloads. We firmly believe that the same platform will enable you to do that.”
The broader theme is enterprise platform simplification. Companies are trying to expand AI efforts while reducing tool sprawl, tightening security and improving operational consistency. That gives Red Hat a timely message: fewer disconnected systems, more shared control and a Kubernetes-based model that can support both legacy and next-generation workloads, noted Siamak Sadeghianfar, senior manager for product management at Red Hat.
“We invest a lot in OpenShift Dev Spaces, [which] gives developers a one-click web-based regulated development environment with all the capabilities that they have on their workstation, but also in their IDE, with [the] difference that they don’t have to manage the dependencies or configuration themselves,” he said. “You ask any developers if they change the laptop or change projects, it takes a week to get up to speed. We reduce that to a one-click [process] with Dev Spaces.”
Sovereignty and inference raise the stakes

Sovereign AI is becoming a board-level concern because enterprises need control over models, data, outcomes and compliance. For Red Hat, that creates another opening for open hybrid cloud infrastructure, especially as governments and regulated industries look for ways to keep AI systems transparent, portable and aligned with local requirements, emphasized Vincent Caldeira, chief technology officer for Asia-Pacific at Red Hat.
“The way we actually define sovereignty is the ability to exert control over your digital destiny,” he told theCUBE. “It’s taking us beyond just the regulatory compliance and security discussions to other dimensions, such as economic competitiveness. Countries that don’t have enough access to GPUs — they can’t have AI factories, meaning they cannot build their own models, their own AI capability.”
Inference adds another layer to the market pressure. Training may still dominate much of the AI conversation, but production AI depends on running models efficiently, securely and close enough to enterprise data to be useful. Red Hat’s work around InstructLab, llm-d and cost-effective models reflects the broader push to make production AI sustainable, according to Roberto Carratalá, principal AI platform architect at Red Hat.
“We already saw that a lot of our customers transitioned from the exploratory phase to now putting things in production, [and] that consumes a lot of different tokens,” he said. “They are skyrocketing on this consumption. You need to be able to have these cost-effective models.”
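That consumption is straightforward to instrument, since self-hosted inference servers such as vLLM, which llm-d builds on, typically expose an OpenAI-compatible API that reports token usage on every response. A hedged sketch, with the endpoint URL and model name as illustrative placeholders:

```python
from openai import OpenAI

# Point the standard OpenAI client at a self-hosted, OpenAI-compatible
# endpoint. URL, key handling and model name are illustrative placeholders.
client = OpenAI(base_url="http://inference.internal:8000/v1", api_key="unused")

resp = client.chat.completions.create(
    model="granite-3.1-8b-instruct",  # assumed model name for illustration
    messages=[{"role": "user", "content": "Summarize this quarter's incident report."}],
)

# The usage block is where token accounting starts: log it per request so
# consumption can be attributed to teams, applications and workloads.
u = resp.usage
print(f"prompt={u.prompt_tokens} completion={u.completion_tokens} total={u.total_tokens}")
```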
This is also where the Nvidia ecosystem, accelerator availability and token economics come into focus. Enterprises are discovering that moving AI into production changes the cost profile. It is not enough to build models; organizations need to run them repeatedly, govern them consistently and optimize the infrastructure underneath them.
“Everybody knows if you try to get a GPU these days, it’s almost impossible. In an enterprise environment, it’s an order of magnitude worse,” Messer told theCUBE. “You really need to make sure that these expensive GPUs are not underutilized or not well-managed.”
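That utilization concern is directly measurable from the hardware. Below is a minimal sketch using NVIDIA’s NVML Python bindings (the nvidia-ml-py package); the 30% threshold for flagging a potentially underutilized card is an illustrative assumption, not a vendor guideline.

```python
# Minimal sketch: spot underutilized GPUs with NVIDIA's NVML bindings.
# Install with `pip install nvidia-ml-py`; the threshold is an assumption.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu  # % busy
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        flag = "  <- possibly underutilized" if util < 30 else ""
        print(f"GPU {i}: {util}% busy, "
              f"{mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB{flag}")
finally:
    pynvml.nvmlShutdown()
```

In practice this signal feeds schedulers and dashboards rather than a one-off script, but per-device utilization and memory pressure are the numbers platform teams watch to keep expensive accelerators busy.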
(* Disclosure: TheCUBE is a paid media partner for Red Hat Summit. Sponsors of theCUBE’s event coverage do not have editorial control over content on theCUBE or SiliconANGLE.)