The Greatest AI Show On Earth
The NVIDIA GTC conference has a reputation for delivering announcements that reshape industry roadmaps. …
The Greatest AI Show On Earth was written by David Gordon at The Next Platform.
This is turning into a “dog bites man” story, but the forecasts for spending in the datacenter for this year keep going up and up. A few days ago, Gartner’s economists and prognosticators finished up their tea, looked at the leaves at the bottom of the cup through a polished crystal ball, and predicted that datacenter spending this year would go up. …
Datacenter Spending Forecast Revised Upwards – Yet Again was written by Timothy Prickett Morgan at The Next Platform.
Like Google and Meta Platforms, Amazon knows exactly how to infuse AI into its business operations such as online retail, transportation, advertising, and even the Amazon Web Services cloud. …
The Twin Engine Strategy That Propels AWS Is Working Well was written by Timothy Prickett Morgan at The Next Platform.
I have unearthed a few old articles typed during my adolescence, between 1996 and 1998. Unremarkable at the time, these pages now form, three decades later, the chronicle of a vanished era.
The word “blog” does not exist yet. Wikipedia is yet to come. Google has not been born. AltaVista reigns over search, while already struggling to embrace the nascent immensity of the web. To meet someone, you agree in advance and plan your route on paper maps. 🗺️
The web is taking off. The CSS specification has just emerged, while HTML tables still serve for page layout. Cookies and advertising banners are making their appearance. Pages are adorned with music and video, forcing browsers to arm themselves with plugins. Netscape Navigator holds 86% of the market, but Windows 95 now bundles Internet Explorer to catch up quickly. Facing this offensive, Netscape open-sources its browser.
France lags behind. Outside universities, Internet access remains expensive and laborious. Minitel still reigns, offering the phone directory, train tickets, and remote shopping. None of this is yet possible on the Internet: buying a CD online is a pipe dream. Encryption suffers from ill-suited regulation: the DES algorithm is capped at 40 bits and Continue reading

In the previous post, we covered PIM Dense Mode and mentioned that it is not widely used in production because of its flood and prune behaviour. Every router in the network receives the multicast traffic first, and then routers without interested receivers have to send prune messages. This is inefficient, especially in large networks.

In this post, we will look at PIM Sparse Mode, which takes the opposite approach. Instead of flooding traffic everywhere and pruning where it is not needed, Sparse Mode only sends traffic to parts of the network that explicitly request it. Routers with interested receivers send Join messages and only then does the multicast traffic start flowing. This makes Sparse Mode much more efficient and scalable, which is why it is the preferred mode in most production networks today.
In Dense Mode, we saw two main problems. Multicast traffic is flooded everywhere, and every router has to maintain state for every multicast group, even if all its interfaces are pruned. Sparse Mode Continue reading
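To make the contrast concrete, here is a minimal Python sketch of the two behaviours. It is a toy model, not a PIM implementation: the topology, router names, and group membership are invented purely for illustration.

```python
# Toy contrast between dense-mode flood-and-prune and sparse-mode explicit joins.
# Conceptual sketch only -- topology and names are hypothetical, not real PIM code.

# Simple hub-and-spoke topology: first-hop router "R1" forwards toward edge routers.
TOPOLOGY = {"R1": ["R2", "R3", "R4"]}

# Only R3 has a host that joined the multicast group (e.g. via IGMP).
INTERESTED = {"R3"}


def dense_mode(topology, interested):
    """Flood to every downstream router first, then prune branches with no receivers."""
    flooded = set(topology["R1"])      # everyone receives the traffic initially
    pruned = flooded - interested      # routers without receivers must send Prune messages
    return flooded, pruned


def sparse_mode(topology, interested):
    """Forward only toward routers that sent an explicit Join."""
    return {router for router in topology["R1"] if router in interested}


if __name__ == "__main__":
    flooded, pruned = dense_mode(TOPOLOGY, INTERESTED)
    print(f"Dense mode : traffic hits {sorted(flooded)}, prunes come back from {sorted(pruned)}")
    print(f"Sparse mode: traffic only reaches {sorted(sparse_mode(TOPOLOGY, INTERESTED))}")
```

The point of the toy model is simply that sparse mode never creates state (or wastes bandwidth) on the branches that nobody asked for.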
Welcome to Technology Short Take #190! This is the first Tech Short Take of 2026, and it has been nearly three months (wow!) since the last one. I can’t argue that I fell off the blogging bandwagon over the end of 2025 and early 2026. I won’t get into all the reasons why (if you’re interested then feel free to reach out and I’ll fill you in). Enough about me—let’s get to the technical content! Here’s hoping you find something useful.

The future of network design and architecture, based on current trends, is going to be working with and around resource constraints. How would resource constraints impact the way we design and manage networks? Mike Bushong joins Tom, Eyvonne, and Russ to ponder network engineering in a resource-constrained world.
download
In our previous post, we addressed the most common questions platform teams are asking as they prepare for the retirement of the NGINX Ingress Controller. With the March 2026 deadline fast approaching, this guide provides a hands-on, step-by-step walkthrough for migrating to the Kubernetes Gateway API using Calico Ingress Gateway. You will learn how to translate NGINX annotations into HTTPRoute rules, run both models side by side, and safely cut over live traffic.
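By way of illustration (a hedged sketch, not taken from the guide itself), here is one common translation: an NGINX canary split re-expressed as weighted backendRefs on a Gateway API HTTPRoute. The structure is shown as a Python dict mirroring the YAML manifest; the gateway and service names are hypothetical, and the exact mapping for other annotations depends on the controller you migrate to.

```python
# Hypothetical example: an NGINX canary annotation pair expressed as a Gateway API
# HTTPRoute with weighted backendRefs. Names ("my-gateway", "app-stable", "app-canary")
# are placeholders; this mirrors the YAML manifest structure, it is not a full migration.

# Roughly equivalent to an Ingress carrying:
#   nginx.ingress.kubernetes.io/canary: "true"
#   nginx.ingress.kubernetes.io/canary-weight: "10"
http_route = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "HTTPRoute",
    "metadata": {"name": "app-route"},
    "spec": {
        "parentRefs": [{"name": "my-gateway"}],
        "hostnames": ["app.example.com"],
        "rules": [{
            "matches": [{"path": {"type": "PathPrefix", "value": "/"}}],
            # The canary weight becomes a native traffic split instead of an annotation.
            "backendRefs": [
                {"name": "app-stable", "port": 80, "weight": 90},
                {"name": "app-canary", "port": 80, "weight": 10},
            ],
        }],
    },
}

if __name__ == "__main__":
    import json
    print(json.dumps(http_route, indent=2))  # same structure you would apply as YAML
```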
The announced retirement of the NGINX Ingress Controller has created a forced migration path for the many teams that relied on it as the industry standard. While the Ingress API is not yet officially deprecated, the Kubernetes SIG Network has designated the Gateway API as its official successor. Legacy Ingress will no longer receive enhancements and exists primarily for backward compatibility.
While the Ingress API served the community for years, it reached a functional ceiling that the Gateway API, as implemented by Calico Ingress Gateway, is designed to move past.
Here is how we know computing could eventually be a peer to energy, transportation, sustenance, and healthcare as a basic infrastructure need – and will be a bigger part of our lives in the future, if the hyperscalers and cloud builders have their way: The front loading of enormous capital expenses. …
With GenAI Turbochargers, Google Is Shifting Its Cloud Into A Higher Gear was written by Timothy Prickett Morgan at The Next Platform.
Welcome to the 24th edition of Cloudflare’s Quarterly DDoS Threat Report. In this report, Cloudforce One offers a comprehensive analysis of the evolving threat landscape of Distributed Denial of Service (DDoS) attacks based on data from the Cloudflare network. In this edition, we focus on the fourth quarter of 2025, as well as share overall 2025 data.
The fourth quarter of 2025 was characterized by an unprecedented bombardment launched by the Aisuru-Kimwolf botnet, dubbed “The Night Before Christmas” DDoS attack campaign. The campaign targeted Cloudflare customers as well as Cloudflare’s dashboard and infrastructure with hyper-volumetric HTTP DDoS attacks exceeding rates of 200 million requests per second (rps), just weeks after a record-breaking 31.4 terabits per second (Tbps) attack.
DDoS attacks surged by 121% in 2025, reaching an average of 5,376 attacks automatically mitigated every hour.
In the final quarter of 2025, Hong Kong jumped 12 places, making it the second most DDoS’d place on earth. The United Kingdom also leapt by an astonishing 36 places, making it the sixth most-attacked place.
Infected Android TVs — part of the Aisuru-Kimwolf botnet — bombarded Cloudflare’s network with hyper-volumetric HTTP DDoS attacks, while telcos emerged as the most-attacked industry.
Receiver Credit-Based Congestion Control (RCCC) is a cornerstone of the Ultra Ethernet transport architecture, specifically designed to eliminate incast congestion. Incast occurs at the last-hop switch when the aggregate data rate from multiple senders exceeds the egress interface capacity of the target’s link. This mismatch leads to rapid buffer exhaustion on the outgoing interface, resulting in packet drops and severe performance degradation.
Figure 8-1 illustrates the operational flow of the RCCC algorithm. In a standard scenario without credit limits, source Rank 0 and Rank 1 might attempt to transmit at their full 100G line rates simultaneously. If the backbone fabric consists of 400G inter-switch links, the core utilization remains a comfortable 50% (200G total traffic). However, because the target host link is only 100G, the last-hop switch (Leaf 1B-1) becomes an immediate bottleneck. The switch is forced to queue packets that cannot be forwarded at the 100G egress rate, eventually triggering incast congestion and buffer overflows.
While "incast" occurs at the egress interface and can resemble head-of-line blocking, it is fundamentally a "fan-in" problem where multiple sources converge on a single receiver. Under RCCC, standard Explicit Congestion Notification (ECN) on the last-hop switch's egress interface is Continue reading
Whenever I claim that the initial use case for MPLS was improved forwarding performance (using the RFC that matches the IETF MPLS BoF slides as supporting evidence), someone inevitably comes up with a source claiming something along these lines:
The idea of speeding up the lookup operation on an IP datagram turned out to have little practical impact.
That might be true, although I do remember how hard it was for Cisco to build the first IP forwarding hardware in the AGS+ CBUS controller. Switching labels would be much faster (or at least cheaper), but the time it takes to do a forwarding table lookup was never the main consideration. It was all about the aggregate forwarding performance of core devices.
Anyhow, Duty Calls. It’s time for another archeology dig. Unfortunately, most of the primary sources irrecoverably went to /dev/null, and personal memories are never reliable; comments are most welcome.
Pent-up demand for MI308 GPUs in China, which AMD has been trying to get a license to sell since early last year, was finally unlocked when that license was approved, and $360 million in Instinct GPU sales that were not officially part of the pipeline made their way onto the AMD books in Q4 2025. …
AMD Finally Makes More Money On GPUs Than CPUs In A Quarter was written by Timothy Prickett Morgan at The Next Platform.
During his more than two decades with Nvidia, Rev Lebaredian has had a ringside seat to the show that has been the evolution of modern AI, from the introduction of the AlexNet deep convolutional neural network, which made waves by drastically lowering the error rate at the 2012 ImageNet challenge, to the introduction of generative AI and now agentic AI, where systems can create AI assistants to help with knowledge work. …
Dassault And Nvidia Bring Industrial World Models To Physical AI was written by Jeffrey Burt at The Next Platform.