AMD Finally Makes More Money On GPUs Than CPUs In A Quarter

Pent-up demand for MI308 GPUs in China, which AMD has been trying to get a license to sell since early last year, was finally unleashed when approval came through, so $360 million in Instinct GPU sales that were not officially part of the pipeline made their way onto the AMD books in Q4 2025.

AMD Finally Makes More Money On GPUs Than CPUs In A Quarter was written by Timothy Prickett Morgan at The Next Platform.

Dassault And Nvidia Bring Industrial World Models To Physical AI

During his more than two decades with Nvidia, Rev Lebaredian has had a ringside seat to the show that has been the evolution of modern AI, from the introduction of the AlexNet deep convolutional neural network that made waves by drastically lowering the error rate at the 2012 ImageNet challenge to the introduction of generative AI and now agentic AI, where systems can create AI assistants to help with knowledge work.

Dassault And Nvidia Bring Industrial World Models To Physical AI was written by Jeffrey Burt at The Next Platform.

OMG, After a Decade, VXLAN Is Still Insecure

In 2017 (over eight years ago), I was making fun of the fact that “VXLAN is insecure” was news to some people. Obviously, the message needed to be repeated, as the same author gave a very similar presentation two years later at a security conference.

Unfortunately, it seems that everything old is new again (see also RFC 1925 rules 4 and 11), as proved by the “Using GRE and VXLAN for Fun and Profit” presentation at DEFCON 33 (my summary). Even if you have known for decades that unencrypted tunnels are insecure (duh!), you might still want to read the summary of the talk (published on the APNIC blog) and view the slides.

Calico Ingress Gateway: Key FAQs Before Migrating from NGINX Ingress Controller

What Platform Teams Need to Know Before Moving to Gateway API

We recently sat down with representatives from 42 companies to discuss a pivotal moment in Kubernetes networking: the NGINX Ingress retirement.

With the March 2026 retirement of the NGINX Ingress Controller fast approaching, platform teams are now facing a hard deadline to modernize their ingress strategy. This urgency was reflected in our recent workshop, “Switching from NGINX Ingress Controller to Calico Ingress Gateway”, which saw an overwhelming turnout, with engineers representing a cross-section of the industry, from financial services to high-growth tech startups.

During the session, the Tigera team highlighted a hard truth for platform teams: the original Ingress API was designed for a simpler era. Today, teams are struggling to manage production traffic through “annotation sprawl”—a web of brittle, implementation-specific hacks that make multi-tenancy and consistent security an operational nightmare.

The move to the Kubernetes Gateway API isn’t just a mandatory update; it’s a graduation to a role-oriented, expressive networking model. We’ve previously explored this shift in our blogs on Understanding the NGINX Retirement and Why the Ingress NGINX Controller is Dead.

Bridging the Role Gap: Transitioning from the flat, annotation-heavy Ingress model to the role-oriented Continue reading

TACC Explores Mixed Precision And FP64 Emulation For HPC With Horizon

If you want to test out an idea in HPC simulation and modeling and see how it affects a broad array of scientific applications, there is probably not a better place than the Texas Advanced Computing Center at the University of Texas.

TACC Explores Mixed Precision And FP64 Emulation For HPC With Horizon was written by Timothy Prickett Morgan at The Next Platform.

Improve global upload performance with R2 Local Uploads

Today, we are launching Local Uploads for R2 in open beta. With Local Uploads enabled, object data is automatically written to a storage location close to the client first, then asynchronously copied to where the bucket lives. The data is immediately accessible and stays strongly consistent. Uploads get faster, and data feels global.

For many applications, performance needs to be global. Users uploading media content from different regions, for example, or devices sending logs and telemetry from all around the world. But your data has to live somewhere, and that means uploads from far away have to travel the full distance to reach your bucket.

R2 is object storage built on Cloudflare's global network. Out of the box, it automatically caches object data globally for fast reads anywhere — all while retaining strong consistency and zero egress fees. This happens behind the scenes whether you're using the S3 API, Workers Bindings, or plain HTTP. And now with Local Uploads, both reads and writes can be fast from anywhere in the world.
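If you reach R2 through its S3-compatible API, enabling Local Uploads shouldn't require any client-side changes. Here is a minimal sketch of such an upload with boto3; the account ID, bucket name, key, and credentials are placeholders, and Local Uploads itself is toggled per bucket in the dashboard rather than in this code.

```python
# Minimal sketch: uploading an object to R2 through its S3-compatible API.
# The endpoint, credentials, bucket, and key below are placeholders; Local
# Uploads is enabled per bucket in the Cloudflare dashboard, not in client code.
import boto3

ACCOUNT_ID = "<your-account-id>"   # placeholder
BUCKET = "my-bucket"               # placeholder

s3 = boto3.client(
    "s3",
    endpoint_url=f"https://{ACCOUNT_ID}.r2.cloudflarestorage.com",
    aws_access_key_id="<r2-access-key-id>",
    aws_secret_access_key="<r2-secret-access-key>",
)

# With Local Uploads enabled on the bucket, this PUT lands in nearby storage
# first and is copied asynchronously to the bucket's home location, while the
# object stays immediately readable and strongly consistent.
s3.put_object(Bucket=BUCKET, Key="logs/device-42.json", Body=b'{"ok": true}')
```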

Try it yourself in this demo to see the benefits of Local Uploads.

Ready to try it? Enable Local Uploads in the Cloudflare Dashboard under your bucket's settings, or Continue reading

Interface MAC Address in IOS Layer-2 Images

Here’s another “You can’t make this up, but it sounds too crazy to be true” story: Cisco IOS layer-2 images change the interface MAC address when you change the interface switchport status.

Let me start with a bit of background:

  • IOL Layer 2 image starts with interfaces enabled and in bridged (switchport) mode (details)
  • netlab has to run a normalize script (applicable to IOLL2, IOSv L2, and Arista EOS) before configuring anything else to ensure all interfaces are shut down.
  • The IOLL2 normalize Jinja template had a bug – when setting the interface MAC address, it checked l.mac_address instead of intf.mac_address (a hypothetical template sketch follows below). Nevertheless, everything worked because the MAC addresses were also set during the initial device configuration.
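To make the variable mix-up concrete, here is a hypothetical, heavily simplified per-interface normalize template (not the actual netlab template), rendered with Python's jinja2. The template receives each interface as intf, so any MAC reference has to use intf.mac_address; using StrictUndefined makes a stray reference such as l.mac_address fail loudly at render time.

```python
# Hypothetical sketch of a per-interface "normalize" template of the kind the
# post describes (NOT the actual netlab template). The template receives each
# interface as 'intf', so the MAC reference must use intf.mac_address.
from jinja2 import Environment, StrictUndefined

TEMPLATE = """\
interface {{ intf.ifname }}
 shutdown
{% if intf.mac_address is defined %} mac-address {{ intf.mac_address }}
{% endif %}"""

env = Environment(undefined=StrictUndefined)   # misspelled names fail loudly

interfaces = [
    {"ifname": "Ethernet0/1", "mac_address": "5254.0000.0001"},
    {"ifname": "Ethernet0/2"},                 # no static MAC on this one
]

for intf in interfaces:
    print(env.from_string(TEMPLATE).render(intf=intf))
```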

NB560: Microsoft Doubles Down on Custom AI Chip; CrowdStrike Brandishes Big Bucks for Browser Security

Take a Network Break! We’ve got Red Alerts for HPE Juniper Session Smart Routers and SolarWinds. In this week’s news, Microsoft debuts its second-generation AI inferencing chip, Mplify rolls out a new Carrier Ethernet certification for supporting AI workloads, and AWS upgrades its network firewall to spot GenAI application traffic and filter Web categories. Google... Read more »

S3 is the new network: Rethinking data architecture for the cloud era

For decades, distributed databases have been built around the assumption that storage will live close to compute. The farther data travels over the network, the reasoning goes, the greater the potential for delay. Local RAID (redundant array of independent disks) arrays, network-attached storage (NAS), and cluster file systems keep data close, making it quick and easy to access.

But in a distributed system, keeping the entire data store close to compute makes scaling slow, cumbersome, and expensive. Each time a node or cluster is replicated, its associated data must be replicated as well. It isn’t ideal, but until recently, there wasn’t any reasonable alternative. Databases had to scale. Service-level agreements (SLAs) had to be met. Wide-area networks weren’t reliable enough to support high-performance databases at scale. Database designers accordingly spent a great deal of energy solving problems related to coordination, consistency, and replication logic.

But imagine things were different. What if they didn’t have to worry about the network, where their data lived, or how to get it from Point A to Point B? How would they design a database then? That’s the intriguing question raised by the advent of cloud object storage services like AWS S3, Google Cloud Continue reading

Fast FRR Container Configuration

After creating the infrastructure that generates the device configuration files within netlab (not in an Ansible playbook), it was time to try to apply it to something else, not just Linux containers. FRR containers were the obvious next target.

netlab uses two different mechanisms to configure FRR containers:

  • Data-plane features are configured with bash scripts using ip commands and friends.
  • Control-plane features are configured with FRR’s vtysh.

I wanted to replace both with Linux scripts that could be started with the docker exec command.
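Here is a rough sketch of what that approach could look like (an illustration of the idea, not netlab's actual code): plain ip commands for the data plane and vtysh with multiple -c arguments for the control plane, both pushed into the running container with docker exec. The container name, interface, addresses, and routing commands are placeholders.

```python
# Illustrative sketch (not netlab's actual code): pushing data-plane and
# control-plane configuration into a running FRR container with 'docker exec'.
# Container name, interface, addresses, and commands below are placeholders.
import subprocess

CONTAINER = "clab-frr-r1"   # placeholder container name

def docker_exec(*cmd: str) -> None:
    """Run a command inside the FRR container and fail loudly on errors."""
    subprocess.run(["docker", "exec", CONTAINER, *cmd], check=True)

# Data plane: plain 'ip' commands executed inside the container.
docker_exec("ip", "addr", "add", "10.0.0.1/30", "dev", "eth1")
docker_exec("ip", "link", "set", "eth1", "up")

# Control plane: FRR's vtysh accepts multiple -c arguments, applied in order.
docker_exec(
    "vtysh",
    "-c", "configure terminal",
    "-c", "router ospf",
    "-c", "network 10.0.0.0/30 area 0",
)
```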

Ultra Ethernet: NSCC Destination Flow Control

Figure 6-14 depicts a demonstrative event where Rank 4 receives seven simultaneous flows (1). As these flows are processed by their respective PDCs and handed over to the Semantic Sublayer (2), the High-Bandwidth Memory (HBM) Controller becomes congested. Because HBM must arbitrate multiple fi_write RMA operations requiring concurrent memory bank access and state updates, the incoming packet rate quickly exceeds HBM’s transactional retirement rate. 

This causes internal buffers at the memory interface to fill, creating a local congestion event (3). To prevent buffer overflow, which would lead to dropped packets and expensive RMA retries, the receiver utilizes NSCC to move the queuing "pain" back to the source. This is achieved by using the pds.rcv_cwnd_pend parameter of the ACK_CC header (4). The parameter operates on a scale of 0 to 127; while zero is ignored, a value of 127 triggers the maximum possible rate decrement. In this scenario, a value of 64 is utilized, resulting in a 50% penalty relative to the newly acknowledged data.
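Read literally, the decrement scales linearly with the 7-bit field value. The back-of-the-envelope sketch below follows that reading (it is my interpretation of the paragraph above, not a normative formula from the UEC specification): the window shrinks by rcv_cwnd_pend/127 of the newly acknowledged bytes, so a value of 64 is roughly a 50% penalty (64 / 127 ≈ 0.504).

```python
# Back-of-the-envelope sketch of the receiver-driven decrement described above
# (one reading of the text, not a normative formula from the UEC specification):
# the window shrinks by rcv_cwnd_pend/127 of the newly acknowledged bytes.

def nscc_decrement(cwnd: int, acked_bytes: int, rcv_cwnd_pend: int) -> int:
    """Return the reduced congestion window in bytes."""
    if rcv_cwnd_pend == 0:                     # zero is ignored per the text
        return cwnd
    penalty = acked_bytes * rcv_cwnd_pend / 127
    return max(1, int(cwnd - penalty))

# Example from the scenario: a value of 64 cuts roughly half of the newly
# acknowledged data out of the window.
print(nscc_decrement(cwnd=1_000_000, acked_bytes=100_000, rcv_cwnd_pend=64))
```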

Rather than directly computing a new transport rate, the mechanism utilizes a three-phase process to define a restricted Congestion Window (CWND). This reduction in CWND inherently forces the source to drain its inflight bucket to Continue reading

Introducing the New Tigera & Calico Brand

Same community. A clearer, more unified look.

Today, we are excited to share a refresh of the Tigera and Calico visual identity!

This update better reflects who we are, who we serve, and where we are headed next.

If you have been part of the Calico community for a while, you know that change at Tigera is always driven by substance, not style alone. Since the early days of Project Calico, our focus has always been clear: Build powerful, scalable networking and security for Kubernetes, and do it in the open with the community.

Built for the Future, With the Community

Tigera was founded by the original Project Calico engineering team and remains deeply committed to maintaining Calico Open Source as the leading standard for container networking and network security.

“Tigera’s story began in 2016 with Project Calico, an open-source container networking and security project. Calico Open Source has since become the most widely adopted solution for containers and Kubernetes. We remain committed to maintaining Calico Open Source as the leading standard, while also delivering advanced capabilities through our commercial editions.”
—Ratan Tipirneni, President & CEO, Tigera

A Visual Evolution

This refresh is an evolution, not a reinvention. You Continue reading

TNO054: AI Skills for CCIEs

Let’s talk about AI for NetOps: It’s not just coming, it’s here. There are tools to use, skills to acquire, and we want to talk about what’s needed for highly certified network engineers to skill up in AI. What certification opportunities or paths exist? What developments do we think we’re going to see here? And... Read more »

Google’s AI advantage: why crawler separation is the only path to a fair Internet

Earlier this week, the UK’s Competition and Markets Authority (CMA) opened its consultation on a package of proposed conduct requirements for Google. The consultation invites comments on the proposed requirements before the CMA imposes any final measures. These new rules aim to address the lack of choice and transparency that publishers (broadly defined as “any party that makes content available on the web”) face over how Google uses search to fuel its generative AI services and features. These are the first consultations on conduct requirements launched under the digital markets competition regime in the UK. 

We welcome the CMA’s recognition that publishers need a fairer deal and believe the proposed rules are a step in the right direction. Publishers should be entitled to have access to tools that enable them to control the inclusion of their content in generative AI services, and AI companies should have a level playing field on which to compete.

But we believe the CMA has not gone far enough and should do more to safeguard the UK’s creative sector and foster healthy competition in the market for generative and agentic AI. 

CMA designation of Google as having Strategic Market Status 

In January Continue reading
