Aria Networks Secures $125M to Launch AI-Optimized Networking Platform
Aria Networks, a startup founded in January 2025 by Mansour Karam, has secured $125 million in funding and unveiled Deep Networking, a platform designed to optimize networks for artificial intelligence (AI) applications.
Building on Karam's previous experience with intent-based networking vendor Apstra, which was acquired by Juniper Networks in 2020, Aria Networks is introducing a path-centric approach that leverages microsecond telemetry. This differs significantly from the switch-centric models employed by traditional vendors.
The Deep Networking platform combines specialized switching hardware, a hardened version of the SONiC network operating system, fine-grained telemetry from switches, transceivers, and network interface cards (NICs), and intelligent agents that operate at every layer of the network stack.
How Deep Networking Works
The Deep Networking platform treats the network as an active participant in AI cluster performance rather than merely passive infrastructure. Fine-grained telemetry collected at the ASIC level, coupled with intelligent agents across the stack and continuous cloud-delivered software updates, is central to this approach.
Aria Networks claims its primary technical differentiation lies within the telemetry layer. Traditional monitoring tools, such as NetFlow, tend to gather data after events occur, often at a coarse resolution. In contrast, Aria's system collects telemetry in real-time with microsecond granularity directly from the switching ASIC.
Karam explained, "We have embedded code sitting right inside the ASIC, right on the ARM processors in the ASIC, that is extracting telemetry." This real-time data enables adaptive tuning of dynamic load balancing parameters, Data Center Quantized Congestion Notification (DCQCN) settings, and failover logic, all without waiting for manual intervention or threshold breaches.
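The closed loop Karam describes can be illustrated with a minimal sketch. The names, policy, and thresholds below are hypothetical (Aria's in-ASIC agents are proprietary); the point is the shape of the mechanism: a controller continuously samples fine-grained queue telemetry and nudges a DCQCN knob, here the ECN marking threshold, rather than waiting for an operator or a static alarm to fire.

```python
from dataclasses import dataclass

@dataclass
class QueueSample:
    """One fine-grained telemetry sample from a switch queue."""
    timestamp_us: int   # microsecond timestamp
    depth_cells: int    # instantaneous queue depth in buffer cells

def tune_ecn_threshold(samples: list[QueueSample],
                       threshold_cells: int,
                       lo: int = 100, hi: int = 4000) -> int:
    """Adaptively adjust the ECN marking threshold (a DCQCN knob).

    Hypothetical policy: if queues routinely approach the current
    threshold, mark earlier (lower the threshold) to signal congestion
    sooner; if they stay well below it, relax the threshold to avoid
    over-marking and unnecessary rate reduction at the senders.
    """
    if not samples:
        return threshold_cells
    avg_depth = sum(s.depth_cells for s in samples) / len(samples)
    if avg_depth > 0.8 * threshold_cells:
        threshold_cells = max(lo, int(threshold_cells * 0.9))
    elif avg_depth < 0.2 * threshold_cells:
        threshold_cells = min(hi, int(threshold_cells * 1.1))
    return threshold_cells
```

In a real deployment this loop would run continuously against telemetry streamed from the ASIC; the sketch only shows why microsecond-granularity samples matter, since a coarse, after-the-fact collector like NetFlow would average away the queue excursions the policy reacts to.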
Agents operate at various levels within the platform, reacting almost instantaneously to link-level events and making strategic decisions about traffic flow placement across the cluster. At the cloud layer, a language model-based agent provides operators with correlated insights in natural language, allowing them to ask questions regarding specific jobs or alert conditions, thereby receiving context-aware responses.
Karam emphasized that simply adding a language model to an existing architecture is insufficient. He stated, "If you ask it to do anything, it could hallucinate and bring down the network," stressing the necessity of context and data for safe operations.
Focus on MFU and Token Efficiency
While traditional networking metrics often revolve around bandwidth and latency, Aria Networks is emphasizing two specific metrics: Model FLOPS Utilization (MFU) and token efficiency. MFU measures the ratio of achieved floating-point operations per second (FLOPS) per accelerator to the hardware's theoretical peak, while token efficiency can be expressed as tokens consumed per dollar or tokens produced per unit of time.
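As a concrete illustration (not Aria's code), the MFU definition above reduces to simple arithmetic: sustained FLOPS summed across accelerators, divided by the cluster's aggregate theoretical peak.

```python
def model_flops_utilization(achieved_flops_per_gpu: list[float],
                            peak_flops_per_gpu: float) -> float:
    """MFU: achieved FLOPS as a fraction of theoretical peak.

    achieved_flops_per_gpu: sustained FLOPS measured on each accelerator
    peak_flops_per_gpu: vendor-rated theoretical peak per accelerator
    """
    total_achieved = sum(achieved_flops_per_gpu)
    total_peak = peak_flops_per_gpu * len(achieved_flops_per_gpu)
    return total_achieved / total_peak

# Example: 4 accelerators rated at 1e15 peak FLOPS, each sustaining 4e14
mfu = model_flops_utilization([4e14, 4e14, 4e14, 4e14], 1e15)
# mfu == 0.4, i.e. the cluster is extracting 40% of its theoretical compute
```

The example numbers are illustrative only; real training runs report MFU well below 100% precisely because of stalls such as the network issues Aria is targeting.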
Karam indicated that the network affects both MFU and token efficiency because it touches every component in a cluster. For instance, a problematic NIC in a large cluster can reduce MFU by as much as 1.7% during a specific operation, while a faulty transceiver can trigger continuous traffic rerouting, hurting both MFU and cost.
Aria's modeling suggests that even a modest 3% improvement in MFU across a cluster could enhance annual revenue by approximately $49.8 million, equating to a 7.9% revenue increase based on prevailing token pricing.
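Aria's two figures are internally consistent: if a $49.8 million annual gain corresponds to a 7.9% increase, the implied baseline is roughly $630 million in annual revenue. That baseline is an inference from the published numbers, not a figure Aria disclosed; a quick back-of-the-envelope check:

```python
uplift_usd = 49.8e6   # claimed annual revenue gain from a 3% MFU improvement
uplift_pct = 0.079    # claimed relative revenue increase (7.9%)

# Implied baseline annual revenue before the MFU improvement
baseline_usd = uplift_usd / uplift_pct
print(f"Implied baseline revenue: ${baseline_usd / 1e6:.0f}M")  # prints ≈ $630M
```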
Switch Portfolio
The hardware lineup from Aria Networks features a series of switches built on Broadcom ASICs, running a hardened, standards-based SONiC implementation. The lineup includes:
- Aria Switch 800G: built on the 51.2T Broadcom Tomahawk 5 ASIC, with 64 x 800G OSFP ports.
- Aria Switch 1.6T High Radix: an air-cooled unit built on the 102.4T Broadcom Tomahawk 6 (TH6) ASIC, with 128 x 800G OSFP ports.
- Aria Switch 1.6T: a compact 2RU unit supporting air and full liquid cooling, with 64 x 1.6T OSFP ports.
Future Prospects with Forward Deployed Engineers
Aria Networks has implemented a unique approach by embedding forward deployed engineers (FDEs) with clients from the deployment stage onwards. Karam stated that this model differs significantly from traditional professional services, with FDEs ensuring that customer feedback directly informs product development.
"Everything the forward deployed engineers do ultimately gets engineered back into the products," Karam explained, highlighting the alignment of FDEs with product direction. This iterative feedback loop is aimed at accelerating product enhancements and ensuring the network remains operational and effective.
Karam concluded with an emphasis on continuous improvement, stating, "Bringing in all that intelligence so that we can increase the breadth of the solution, while keeping it safe to use — that’s going to be a big, continued area of investment."
Source: Network World News