At Interop Tokyo 2025, Juniper Networks and Spirent Communications took center stage with a powerful joint demonstration that earned them the coveted “Best of Show” award. The demo showcased the industry’s first validated Ultra Ethernet Transport (UET) performance benchmark on Juniper’s cutting-edge QFX5240 Ethernet/IP switch, featuring 64 ports of ultra-high-speed 800G connectivity. This achievement represents a significant leap forward in AI data center networking, aligning closely with the mission of the Ultra Ethernet Consortium (UEC) to deliver scalable, intelligent, and high-performance fabrics purpose-built for AI/ML workloads.

The rise of Ultra Ethernet for AI data centers
While RDMA over Converged Ethernet version 2 (RoCEv2) and Data Center Quantized Congestion Notification (DCQCN) are widely deployed today for AI training and inference fabrics, these legacy technologies have limitations in scalability, flow control, and dynamic workload handling. The Ultra Ethernet Consortium (UEC) seeks to address these limitations by redefining how Ethernet is used in AI data centers for distributed AI training and inference at scale.
UEC aims to support massive scale-out to a million endpoints while providing low-latency synchronization, efficient bandwidth utilization, and native support for collective communication and job coordination. Key architectural enhancements include new transport semantics and advanced congestion signaling, enabling intelligent end-to-end behavior tailored to the demands of modern AI workflows.
The demo setup: high-throughput realism with the QFX5240 and Spirent B3 800G Appliance
During the Interop Tokyo showcase, Juniper and Spirent demonstrated the first public validation of Ultra Ethernet Transport (UET) using the Juniper QFX5240, a high-performance Ethernet switch built on Broadcom’s Tomahawk 5 (TH5) ASIC. This 64x800G switch was selected for its ultra-high bandwidth density, deterministic latency performance, and deep telemetry capabilities—key requirements for AI workload fabrics.
The primary goals of the demo were threefold:
- Validate UET packet forwarding capabilities using real-time traffic.
- Benchmark key KPIs for AI data center workloads, such as end-to-end latency, packet delivery efficiency, and bandwidth utilization.
- Demonstrate the coexistence of UET and RoCEv2 workloads on a single switching platform.
To achieve this, four Spirent ports were directly connected to the QFX5240, forwarding RoCEv2 and the new UET-style AI workloads concurrently. This mirrored a realistic deployment scenario in which next-generation and existing legacy traffic types must coexist without interfering with one another or compromising performance or reliability.
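As a rough illustration of how such mixed traffic can be separated in captures or telemetry, the sketch below classifies packets by UDP destination port. RoCEv2 uses the IANA-assigned UDP port 4791; the UET port shown is purely a placeholder, since the demo's actual port assignments are not detailed here.

```python
# Minimal sketch: separating concurrent RoCEv2 and UET-style flows by UDP
# destination port. RoCEv2 uses IANA-assigned UDP port 4791; UET_UDP_PORT
# below is a hypothetical placeholder, not a value from the demo or UEC spec.
from collections import Counter

ROCEV2_UDP_PORT = 4791   # IANA-assigned UDP port for RoCEv2
UET_UDP_PORT = 4793      # placeholder for illustration only

def classify(udp_dst_port: int) -> str:
    """Map a UDP destination port to a traffic class."""
    if udp_dst_port == ROCEV2_UDP_PORT:
        return "RoCEv2"
    if udp_dst_port == UET_UDP_PORT:
        return "UET"
    return "other"

# Synthetic (udp_dst_port, frame_bytes) records standing in for tester output.
packets = [(4791, 4096), (4793, 8192), (4793, 8192), (4791, 4096), (443, 1500)]

byte_counts = Counter()
for dst_port, frame_bytes in packets:
    byte_counts[classify(dst_port)] += frame_bytes

for traffic_class, total_bytes in byte_counts.items():
    print(f"{traffic_class}: {total_bytes} bytes")
```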

Next-gen packet format: PDS and SES in action
One of the key innovations demonstrated was UEC’s newly proposed packet format, which introduces two additional sublayers beyond standard UDP: the Packet Delivery Sublayer (PDS) and the Semantic Sublayer (SES). These extensions allow enhanced metadata exchange and provide transport-level semantic awareness, which are critical for optimizing collective operations and synchronization in AI training workloads. Positioned after UDP in the transport stack, PDS and SES enable workload-aware routing and coordination across large-scale, distributed AI infrastructures.
Live Wireshark captures from the demo reveal the detailed structure of the new packet format. The PDS layer facilitates robust delivery semantics, while the SES layer enables intent signaling, including collective operation types, priority levels, and workload identifiers. These capabilities empower switches and endpoints to make more intelligent decisions about packet prioritization, buffering, and forwarding in complex AI training pipelines. The demonstration offered a hands-on look at how the new transport protocols will operate in real-world environments and how seamlessly they can integrate into existing Ethernet infrastructures.
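For readers who want a concrete mental model of that layering, the minimal Python sketch below mirrors the stack described above: the PDS and SES headers sit after UDP and in front of the payload. The field names (sequence_number, job_id, collective_op, priority) are illustrative assumptions chosen to reflect the capabilities discussed, not the actual UEC wire format.

```python
# Illustrative-only model of the UET-style layering described above:
# Ethernet / IP / UDP, then a Packet Delivery Sublayer (PDS) header and a
# Semantic Sublayer (SES) header before the payload. Field names and types
# here are assumptions for readability, not the UEC wire format.
from dataclasses import dataclass

@dataclass
class PDSHeader:
    sequence_number: int   # assumed: supports reliable/ordered delivery semantics
    ack_request: bool      # assumed: whether the receiver should acknowledge

@dataclass
class SESHeader:
    job_id: int            # assumed: identifies the AI training job
    collective_op: str     # assumed: e.g. "all_reduce", "all_gather"
    priority: int          # assumed: scheduling/buffering hint

@dataclass
class UETPacket:
    # Outer Ethernet/IP/UDP headers omitted; this models the stack after UDP.
    pds: PDSHeader
    ses: SESHeader
    payload: bytes

pkt = UETPacket(
    pds=PDSHeader(sequence_number=42, ack_request=True),
    ses=SESHeader(job_id=7, collective_op="all_reduce", priority=3),
    payload=b"\x00" * 4096,
)
print(pkt.ses.collective_op, len(pkt.payload))
```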


Operational readiness: dual workload validation and JobID statistics
The most compelling aspect of the demo was the QFX5240’s ability to forward both RoCEv2 and UET traffic simultaneously without conflict. It handled this dual-workload scenario seamlessly, maintaining consistent performance across its ultra-high-speed 800G ports. This flawless execution of both traffic classes validated the switch’s readiness for heterogeneous AI deployments.
Spirent’s sophisticated emulation environment enables comprehensive workload profiling by capturing detailed job-level statistics, including JobID correlation and latency/error metrics. Advanced instrumentation within Spirent’s UET test suite provides real-time, JobID-based tracking, offering granular insight into flow behavior, latency distribution, and queuing dynamics for both traffic types.
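As a simplified sketch of what JobID-based correlation looks like in practice, the example below aggregates per-packet records into per-job latency and error summaries. The record format and values are assumptions for illustration; they are not Spirent’s actual export schema.

```python
# Minimal sketch of JobID-based correlation, assuming per-packet records of
# (job_id, latency_us, error_flag) exported by a tester or telemetry pipeline.
# The record shape and the numbers are illustrative assumptions only.
from collections import defaultdict
from statistics import mean, quantiles

records = [
    (101, 12.4, False), (101, 13.1, False), (101, 58.0, False),
    (202, 9.8,  False), (202, 10.2, True),  (202, 11.0, False),
]

by_job = defaultdict(lambda: {"latencies": [], "errors": 0})
for job_id, latency_us, error in records:
    by_job[job_id]["latencies"].append(latency_us)
    by_job[job_id]["errors"] += int(error)

for job_id, stats in sorted(by_job.items()):
    lat = stats["latencies"]
    p99 = quantiles(lat, n=100)[98] if len(lat) >= 2 else lat[0]
    print(f"JobID {job_id}: mean={mean(lat):.1f}us p99~{p99:.1f}us errors={stats['errors']}")
```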
Performance was validated at line rate, with no packet drops, congestion propagation, or forwarding errors—even under stress conditions. These results confirmed that both the hardware and software are production-ready for supporting Ultra Ethernet Transport in next-generation AI environments.
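To put “line rate” in perspective, the back-of-the-envelope sketch below computes the theoretical frame rate of an 800G port and checks a hypothetical offered-versus-received counter pair for drops. The frame size and counter values are made-up examples; only the standard 20-byte Ethernet preamble/SFD/inter-frame-gap overhead is a fixed constant.

```python
# Back-of-the-envelope view of what "line rate with zero drops" means on an
# 800G port. Frame size and counter values are illustrative, not demo data.
LINE_RATE_BPS = 800e9
FRAME_BYTES = 4096            # example frame size, not from the demo
WIRE_OVERHEAD_BYTES = 20      # 7 preamble + 1 SFD + 12 inter-frame gap

max_pps = LINE_RATE_BPS / ((FRAME_BYTES + WIRE_OVERHEAD_BYTES) * 8)
print(f"Theoretical max: {max_pps:,.0f} frames/s per 800G port")

# Hypothetical tester counters: offered vs. received over a measurement window.
tx_frames, rx_frames, window_s = 2_186_500_000, 2_186_500_000, 90.0
drops = tx_frames - rx_frames
utilization = (rx_frames / window_s) / max_pps
print(f"Drops: {drops}, utilization: {utilization:.1%}")
```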

Why it matters: scaling AI with open, high-performance infrastructure
The success of the Juniper-Spirent demo underscores a broader industry shift toward purpose-built Ethernet for AI. As AI workloads continue to scale in size and intensity, driven by growing model complexity and the need for massive GPU clusters, traditional transport layers are being outpaced by the requirements of modern AI fabric orchestration. UET represents a paradigm shift, blending Ethernet’s openness with the semantic richness and performance guarantees of specialized AI fabrics, tuned for the needs of model training, large-scale inference, and next-generation distributed applications.
The QFX5240, a cornerstone of Juniper’s Mist™ AI-native networking platform, was engineered for this future and is uniquely positioned to deliver on this vision. With built-in support for advanced queue management, standards-based congestion control, and pervasive observability, it enables operators to deploy AI fabrics that are agile, resilient, and scalable.
Future outlook: From demo to deployment
The Interop Tokyo demonstration marks only the beginning of what is possible with Ultra Ethernet. As UEC continues to refine the transport protocol and expand its ecosystem, broader industry adoption of Ultra Ethernet is expected in production-scale AI environments across hyperscalers, neocloud providers, and enterprise AI data centers.
Juniper and Spirent remain at the forefront of this evolution, driving innovation and delivering the infrastructure that powers the next frontier of AI. They are pushing the boundaries of what’s possible in AI networking by actively contributing to specification development, interoperability testing, and customer enablement. This award-winning demo is not just a proof of concept: it’s a blueprint for scalable, high-performance AI networking.