As AI continues to grow, there will be a rise in on-premises AI deployments and hybrid cloud models
Companies will increasingly need to strike the right balance between hyperscale, enterprise and edge data centres by assessing workload-specific demands. Hyperscale DCs will continue to power large-scale data processing and storage, while enterprise and edge DCs will be essential for data privacy and for latency-sensitive, real-time applications.
Organisations will shift from InfiniBand to Ethernet connectivity for AI deployments
Ethernet is an open, innovative and cost-effective technology backed by a robust ecosystem: a wider pool of skilled professionals and an extensive range of tools and solutions from a broad set of vendors. Collective industry innovation has delivered the highest performance and most advanced solutions, including the highest-radix 800G switches and capabilities optimised for AI workloads. As AI adoption moves into the enterprise and tier 2/3 clouds, the shift from InfiniBand to Ethernet will likely accelerate. Deployment of multivendor GPUs and accelerators will act as a further catalyst for the transition.
New tools and capabilities will emerge to optimise energy consumption and cooling demands, with a shift to nuclear for reliable power
Leveraging AI and automation will be crucial in this shift, allowing data centres to continuously monitor, analyse and adjust energy consumption patterns in real time. Through predictive maintenance, dynamic load balancing and AI-driven cooling systems, data centres can minimise energy waste (and therefore reduce costs) and shrink their overall carbon footprint in practical, measurable ways. Organisations will also need to address the significant cooling demands of AI infrastructure, driving the adoption of liquid cooling solutions that offer more efficient alternatives to traditional air-based cooling and contribute to overall power savings. Customers will increasingly turn to more power-efficient optical modules, such as linear pluggable optics (LPO) and linear receive optics (LRO), to achieve high performance with reduced energy consumption. Additionally, we are likely to see a shift toward nuclear energy as a clean and reliable power source, as data centres seek sustainable options to meet their growing compute demands without compromising environmental goals.
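To make the "adjust in real time" idea concrete, here is a minimal sketch of the predictive-cooling pattern described above: smooth recent rack inlet temperatures, extrapolate the trend, and raise cooling output before the target is breached rather than after. The class name, thresholds and units are illustrative assumptions, not taken from any real DCIM or BMS product.

```python
from collections import deque


class CoolingController:
    """Toy predictive cooling loop (illustrative only): track a short
    window of rack inlet temperatures and nudge cooling output before
    the target temperature is exceeded."""

    def __init__(self, target_c: float = 27.0, window: int = 5):
        self.target_c = target_c               # assumed upper inlet target, in C
        self.readings = deque(maxlen=window)   # rolling window of readings

    def record(self, inlet_temp_c: float) -> None:
        self.readings.append(inlet_temp_c)

    def predicted_next(self) -> float:
        """Linear extrapolation from the last two readings."""
        if len(self.readings) < 2:
            return self.readings[-1] if self.readings else self.target_c
        trend = self.readings[-1] - self.readings[-2]
        return self.readings[-1] + trend

    def adjustment(self) -> float:
        """Positive value = increase cooling (e.g., raise fan/pump duty)."""
        overshoot = self.predicted_next() - self.target_c
        return max(0.0, overshoot)


ctrl = CoolingController(target_c=27.0)
for temp in [24.0, 25.0, 26.5]:
    ctrl.record(temp)
# Trend is +1.5 C per interval, so the next reading is predicted at 28.0 C
print(ctrl.adjustment())  # 1.0 -> act one interval before the breach
```

A production system would replace the two-point extrapolation with a learned model over many sensors, but the proactive structure (predict, then act early) is the same.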
Multivendor automation and AIOps will become indispensable to DC operations
As innovation among GPU, network and storage vendors accelerates, organisations will increasingly implement multivendor AI infrastructure, optimising flexibility, innovation and performance across their systems. In this environment, GPUs will dominate AI and machine learning tasks due to their parallel processing strengths, while CPUs will remain essential for general-purpose workloads. To further enhance operational efficiency, AIOps will play a growing role in predictive and proactive network maintenance, minimising downtime and optimising system health (ultimately improving user experiences). With chatbot interfaces emerging as a standard feature in automation and management tools, data centre teams will have streamlined, responsive ways to interact with their systems. Together, these advancements will enable data centres to meet the increasing demands of AI workloads with robust, multivendor infrastructures and sophisticated operational intelligence.
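At its simplest, the predictive side of AIOps boils down to spotting telemetry that deviates from its baseline before it becomes an outage. The sketch below flags outliers in a stream of per-interval interface error counts using a z-score test; the sample data and function name are illustrative assumptions, and real AIOps platforms use far richer models over multivendor streaming telemetry.

```python
import statistics


def flag_anomalies(samples: list[float], threshold: float = 3.0) -> list[int]:
    """Return the indices of samples whose z-score exceeds the threshold.

    'samples' might be per-interval interface error counts collected via
    streaming telemetry from switches of several vendors (illustrative).
    """
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # a flat series has no outliers
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]


# A sudden error spike at index 6 stands out against the quiet baseline
errors = [2, 3, 2, 4, 3, 2, 90, 3]
print(flag_anomalies(errors, threshold=2.0))  # [6]
```

Flagged indices would then feed a proactive workflow (ticket, traffic drain, or maintenance window) instead of waiting for a hard failure.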