Cisco Systems dropped a major product announcement Tuesday. The networking giant unveiled its Silicon One G300 switch chip designed for AI data centers.
The move puts Cisco in direct competition with Nvidia and Broadcom. All three companies are fighting for market share in the $600 billion AI infrastructure boom.
The Silicon One G300 will help AI chips communicate across hundreds of thousands of network connections. Cisco plans to start selling the product in the second half of 2026.
Taiwan Semiconductor Manufacturing Co. (TSMC) will manufacture the chip on its 3-nanometer process. That advanced node is a key source of the chip’s performance edge.
Cisco promises the G300 will complete certain AI tasks 28% faster than current solutions. The speed boost comes from intelligent data rerouting capabilities.
Martin Lund, executive vice president of Cisco’s Common Hardware Group, explained the technology: the chip automatically redirects data around network problems within microseconds.
The chip includes “shock absorber” features to prevent network slowdowns. These features kick in when massive data spikes hit the system.
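To make the rerouting idea concrete, here is a minimal conceptual sketch, not Cisco’s actual design: a switch keeps precomputed backup next hops for each destination, so traffic shifts to a healthy link the instant a failure is reported instead of waiting for routes to be recomputed. All class, link, and destination names below are hypothetical.

```python
# Conceptual sketch (not Cisco's design): fast failover routing.
# Each destination keeps precomputed candidate next hops, so traffic can be
# redirected the moment a link is reported down, without recomputing routes.

class FastFailoverTable:
    def __init__(self):
        self.next_hops = {}   # destination -> ordered list of next-hop links
        self.link_up = {}     # link id -> health flag

    def install_route(self, dest, candidates):
        """Precompute a primary next hop plus backups for a destination."""
        self.next_hops[dest] = list(candidates)
        for link in candidates:
            self.link_up.setdefault(link, True)

    def mark_link_down(self, link):
        """Telemetry reports a failed (or badly congested) link."""
        self.link_up[link] = False

    def forward(self, dest):
        """Return the first healthy next hop, skipping failed links."""
        for link in self.next_hops.get(dest, []):
            if self.link_up.get(link, False):
                return link
        return None  # no healthy path: caller must buffer or drop


table = FastFailoverTable()
table.install_route("gpu-rack-42", ["spine-1", "spine-2", "spine-3"])
table.mark_link_down("spine-1")       # simulate a link failure
print(table.forward("gpu-rack-42"))   # -> spine-2
```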
Energy efficiency also gets a major upgrade. Cisco says the G300 improves energy efficiency by roughly 70% in fully liquid-cooled systems.
The chip will power new Cisco N9000 and Cisco 8000 systems. These products target AI, hyperscaler, data center, enterprise, and service provider markets.
Networking has become a key competitive arena in AI infrastructure. Nvidia included its own networking chip in the six-chip system it unveiled last month.
Broadcom jumped into the market with its Tomahawk series. The company targets the same data center customers as Cisco.
Cisco positions Silicon One as the industry’s most scalable and programmable unified networking architecture. The platform covers multiple use cases across different market segments.
The G300 uses Intelligent Collective Networking technology. This system allows AI training and delivery chips to communicate efficiently across vast networks.
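In networking, “collective” refers to operations such as all-reduce, where every accelerator must exchange partial results with its peers before training can continue; that all-to-all traffic is what puts pressure on the fabric. The toy sketch below is a generic illustration of what an all-reduce accomplishes, not Cisco’s technology.

```python
# Toy illustration of a "collective" operation (all-reduce), the kind of
# all-to-all traffic that makes the network a bottleneck in AI training:
# every worker must end up with the sum of everyone's gradients.

def all_reduce_sum(worker_grads):
    """Naive all-reduce: each worker receives the elementwise sum."""
    total = [sum(vals) for vals in zip(*worker_grads)]
    return [total[:] for _ in worker_grads]

grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # 3 workers, 2 values each
print(all_reduce_sum(grads))                   # every worker sees [9.0, 12.0]
```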
Failures happen regularly in networks with hundreds of thousands of connections. The chip’s automatic rerouting keeps those hiccups from slowing down AI workloads.
Large data traffic spikes can crash or slow networks without proper management. The G300’s shock absorber features address this challenge head-on.
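A minimal sketch of the general idea behind such a “shock absorber” (again, not the G300’s actual mechanism): a bounded buffer soaks up short bursts and asks upstream senders to pause before it overflows, rather than dropping traffic outright. The names and thresholds here are hypothetical.

```python
# Conceptual sketch: a bounded buffer that absorbs short traffic bursts and
# signals backpressure ("pause") before it overflows, instead of dropping.

from collections import deque

class BurstAbsorber:
    def __init__(self, capacity, pause_threshold=0.8):
        self.queue = deque()
        self.capacity = capacity
        self.pause_threshold = pause_threshold

    def enqueue(self, packet):
        """Accept a packet; return a flow-control hint for the sender."""
        if len(self.queue) >= self.capacity:
            return "drop"                      # buffer truly full
        self.queue.append(packet)
        if len(self.queue) >= self.capacity * self.pause_threshold:
            return "pause"                     # ask upstream to slow down
        return "ok"

    def dequeue(self):
        """Drain one packet toward the output link, if any are buffered."""
        return self.queue.popleft() if self.queue else None


absorber = BurstAbsorber(capacity=10)
hints = [absorber.enqueue(f"pkt-{i}") for i in range(10)]
print(hints[-3:])   # ['pause', 'pause', 'pause'] once the buffer nears capacity
```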
The $600 billion AI infrastructure spending wave drives competition between these tech giants. Companies are racing to build the backbone for next-generation AI systems.
Cisco’s second-half 2026 launch puts it slightly behind competitors already serving this market. Nvidia and Broadcom have existing products in customer hands.
Lund emphasized Cisco’s approach differs from competitors. The company prioritizes total network efficiency over raw speed alone.
The G300 handles peak demand periods without breaking stride. This reliability matters for companies running critical AI workloads around the clock.


