DATA CENTRE
The impact of AI in Data Centres according to DeepSeek


There is much discussion about AI, so we asked DeepSeek, the much-talked-about competitor to ChatGPT, for its views on the consequences for data centre infrastructure and connectivity. As AI models become more complex and compute-intensive, data centres must evolve their infrastructure to support the growing need for high-speed connectivity. In this post, we explore current and future trends in AI bandwidth development based on DeepSeek's insights, and how Aginode is contributing to this transformation.
Current bandwidth landscape: 400G to 800G transition
Today, 400G networks are widely deployed across data centres worldwide. This bandwidth configuration is essential for AI training, with major deployments like NVIDIA's H100 clusters leveraging 400G/800G optical modules. In China, telecom operators have already completed live 400G network transmission tests, signalling a move toward large-scale commercial adoption.
However, the demand for faster and more efficient AI training is pushing the industry toward 800G networks. Offering transmission speeds of 800Gbps with low latency, 800G optical modules are emerging as the key solution for large-scale AI training. Some telecom operators have already used 800G technology for cross-data centre collaborative training, retaining more than 95% of the performance of a single co-located cluster. By 2025, 800G is expected to replace 400G as the new standard in AI data centres.
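To make the difference between these link speeds concrete, here is a back-of-the-envelope sketch of how long it takes to move AI training state (for example, a model checkpoint) over a single link at each generation. The 1 TB figure is a hypothetical payload chosen for illustration, and the calculation deliberately ignores protocol overhead, encoding, and congestion.

```python
def transfer_time_seconds(data_terabytes: float, link_gbps: float) -> float:
    """Idealised time to move data over one link at full utilisation.

    Ignores protocol overhead, encoding, and congestion; this is a
    rough illustration, not a performance prediction.
    """
    bits = data_terabytes * 1e12 * 8  # decimal terabytes -> bits
    return bits / (link_gbps * 1e9)   # Gbps -> bits per second

# Moving a hypothetical 1 TB training checkpoint:
for gbps in (400, 800, 1600):
    print(f"{gbps}G link: {transfer_time_seconds(1.0, gbps):.1f} s")
# 400G -> 20.0 s, 800G -> 10.0 s, 1.6T -> 5.0 s
```

Each doubling of link speed halves the idealised transfer time, which is why synchronisation-heavy AI training workloads benefit so directly from the 400G-to-800G transition.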
The next evolution: 1.6T bandwidth
As AI model sizes continue to expand, the industry is already looking ahead to 1.6T bandwidth solutions. Several factors are accelerating this transition:
- Widening compute-to-bandwidth gap: AI models, such as GPT-4, require massive computing power, and the existing 800G bandwidth may soon become a bottleneck, particularly for unstructured data processing.
- Advancements in switch chips: By 2025, switch chip capacities are expected to reach 102.4T, enabling support for 1.6T optical ports with improved power efficiency and signal integrity.
- Deployment timeline: Initial testing and small-scale applications of 1.6T bandwidth are expected in 2025, with large-scale adoption predicted between 2026 and 2027, particularly for ultra-large AI training clusters and high-performance computing environments.
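The link between switch chip capacity and port speed in the second point above can be sketched with simple division: a chip's total switching bandwidth caps how many ports of a given speed it can serve. The 51.2T comparison figure is our own illustrative addition, not from DeepSeek.

```python
def max_ports(switch_capacity_tbps: float, port_gbps: int) -> int:
    """Maximum number of ports of a given speed that a switch chip's
    total bandwidth can support (ignoring oversubscription and
    breakout configurations)."""
    return int(switch_capacity_tbps * 1000 // port_gbps)

# A 102.4T chip supports the same port count at 1.6T
# as a 51.2T chip does at 800G:
print(max_ports(102.4, 1600))  # 64 x 1.6T ports
print(max_ports(51.2, 800))    # 64 x 800G ports
```

This is why each generation of switch silicon tends to double port speed at a constant radix: the 102.4T generation lets operators keep their network topology while doubling per-port bandwidth to 1.6T.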
Beyond 1.6T: the future of AI networking
Looking even further ahead, bandwidth demands are expected to reach 3.2T and beyond. This will require groundbreaking advancements in optical communication technologies, such as Co-Packaged Optics (CPO) and Linear Pluggable Optics (LPO), as well as continued progress in standardization.
Aginode’s role in driving innovation
Aginode is at the forefront of this technological evolution. With a strong focus on research and development, we have actively developed high-performance networking solutions to support AI data centres. In 2024, Aginode introduced 18 new products and delivered 82 customized solutions to meet the ever-growing demands of AI-driven infrastructure.
As the AI revolution continues, we remain committed to delivering cutting-edge connectivity solutions that ensure seamless, high-speed data transfer for AI applications. The transition from 400G to 800G, and eventually 1.6T, is just the beginning of a new era in AI networking.
Conclusion
The evolution of AI data centre bandwidth is an inevitable response to the growing demands of computing power, algorithms, and data processing. By 2025, 800G will become the mainstream standard, with 1.6T following shortly after. Looking further into the future, the industry must prepare for even greater advancements to sustain AI’s exponential growth.
Aginode’s commitment to innovation ensures that businesses and AI data centres can stay ahead of the curve, benefiting from seamless transitions to next-generation bandwidth solutions. Stay tuned as we continue to explore the future of AI networking!