Imagine you want to deploy a BGP route reflector for MPLS 6PE or L3VPN service. Both services run over MPLS LSPs, use IPv4 BGP sessions, and use IPv4 next hops for BGP routes. There’s absolutely no reason to need IPv6 routing on a node that performs solely control-plane activity (it never appears as a BGP next hop anywhere), right? Cisco IOS disagrees, as I discovered when running route reflector integration tests for netlab 6PE and (MPLS) L3VPN functionality.
Most platforms failed those tests because we forgot to configure route-reflector-client in the labeled IPv6 and VPNv4/VPNv6 address families. That was easy to fix, but the IOS-based devices were still failing the tests, with nothing in the toolchain ever complaining about configuration problems.
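For reference, the missing knob looks roughly like this on a Cisco IOS-style route reflector. This is a hand-written sketch, not the netlab template: the AS number and neighbor address are made up, and the exact address-family syntax varies by platform and release.

```
router bgp 65000
 neighbor 10.0.0.2 remote-as 65000
 neighbor 10.0.0.2 update-source Loopback0
 !
 address-family vpnv4
  neighbor 10.0.0.2 activate
  neighbor 10.0.0.2 route-reflector-client
 !
 ! 6PE: labeled IPv6 routes carried over the IPv4 BGP session
 address-family ipv6
  neighbor 10.0.0.2 activate
  neighbor 10.0.0.2 send-label
  neighbor 10.0.0.2 route-reflector-client
```

Forgetting route-reflector-client in any one of these address families silently breaks route propagation for that service only; the session stays up and nothing complains.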
The cloud-native community is heading to the historic canals and vibrant tech scene of Amsterdam for KubeCon + CloudNativeCon Europe 2026! From March 23–26, Amsterdam will be buzzing with the latest in Kubernetes, platform engineering, and, of course, all things Calico.
Whether you’re a long-time Calico user or just starting your cloud-native security journey, Tigera has a packed schedule to make your KubeCon experience both educational and unforgettable.
Meet Our International Team

Our international team, hailing from Vancouver, Toronto, San Francisco, Cork, London, and Cambridge, is converging on Amsterdam to welcome you! Whether you’re a first-time attendee or a KubeCon veteran, our crew has been through the trenches and is ready to share tips on everything from eBPF security to the best bitterballen in the city.
The biggest shift in the ecosystem this year? Autonomous AI Agents. But as we move these agents into production, how do we ensure they are secure, compliant, and observed?
Join us for our featured workshop: Securing Autonomous AI Agents in Production. We’ll dive deep into how to implement zero-trust security for AI workloads and protect the underlying infrastructure that powers them.
Shane Walsh, Corporate Account Executive (Cork, Continue reading

When it comes to learning and understanding, facts are easy. If I ask you how many bits are in an IPv4 address it’s a single answer. People memorize facts and figures like this all the time. It’s easy to recall them for tests and to prove you understand the material. Where things start getting interesting is when you need to provide context around the answer. Context is expensive.
Questions with one correct answer or with a binary answer choice are easy to deal with cognitively. You memorize the right answer and move on with your life. IPv4 addresses are 32 bits long. The sun rises in the east. You like Star Wars but not Galactica 1980. These things don’t take much effort to recall.
Now, think about why those answers exist. Why does the sun rise in the east? Why are addresses 32 bits long? Why don’t you like Galactica 1980? The answers are much longer now. They involve nuance and understanding of things that are outside of the bounds of simple fact recall. For example, look at this video of Vint Cerf explaining why they decided on 32-bit addresses all the way back in the mid-1970s:
There’s Continue reading
The Receive Interface Group (Rx IFG) is the ingress pre-processing stage that handles the incoming Ethernet bitstream before the packet enters the Packet Processing Array (PPA) of the Receive Network Processing Unit (Rx NPU) in the Cisco Silicon One architecture.
Processing begins at the Rx MAC. The Rx MAC reconstructs (“delimits”) the Ethernet frame from the Physical Coding Sublayer (PCS) bitstream and verifies frame integrity by computing a Frame Check Sequence (FCS) using the CRC-32 algorithm. If the computed FCS does not match the received FCS value, the frame is considered corrupted and is dropped immediately at ingress. If the CRC check succeeds, the frame is admitted for further processing.
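The FCS check described above can be illustrated with a short Python sketch. This is only an illustration of the CRC-32 math (using the well-known property that a CRC-32 over a frame including a correct FCS leaves a constant residue), not Silicon One code; the helper names are mine.

```python
import struct
import zlib

# CRC-32 residue: running CRC-32 over a frame *including* a correct,
# little-endian FCS always yields this constant value.
CRC32_RESIDUE = 0x2144DF1C

def append_fcs(payload: bytes) -> bytes:
    """Append a valid little-endian CRC-32 FCS to a frame body."""
    return payload + struct.pack("<I", zlib.crc32(payload))

def fcs_ok(frame: bytes) -> bool:
    """Validate a frame the way an Rx MAC does: recompute CRC-32 over
    the whole frame and check it against the residue constant."""
    return zlib.crc32(frame) == CRC32_RESIDUE

frame = append_fcs(b"\x01\x02\x03 example frame body")
assert fcs_ok(frame)                               # clean frame passes
assert not fcs_ok(bytes([frame[0] ^ 1]) + frame[1:])  # bit flip -> dropped
```

A real Rx MAC does this in hardware at line rate, but the pass/fail decision is the same: mismatch means the frame is discarded before it ever reaches the PPA.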
After frame validation, the Rx IFG identifies the Ethernet MAC header and detects the presence of IEEE 802.1Q VLAN tags. The Rx IFG performs shallow classification to efficiently manage hardware resources before deeper protocol parsing and forwarding decisions are executed in the Rx NPU. When an IEEE 802.1Q VLAN tag is present, the Rx IFG extracts the Priority Code Point (PCP) bits from the VLAN tag and maps them to an Internal Continue reading
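The shallow 802.1Q classification can be sketched in a few lines of Python. This is a conceptual model only: the TPID check and PCP extraction follow the IEEE 802.1Q tag layout, but the PCP-to-traffic-class map is a hypothetical 1:1 placeholder, not the actual Silicon One mapping.

```python
VLAN_TPID = 0x8100  # IEEE 802.1Q Tag Protocol Identifier

def extract_pcp(frame: bytes):
    """Return the 3-bit PCP from an 802.1Q tag, or None if untagged."""
    ethertype = int.from_bytes(frame[12:14], "big")  # after dst+src MAC
    if ethertype != VLAN_TPID:
        return None
    tci = int.from_bytes(frame[14:16], "big")  # Tag Control Information
    return tci >> 13                           # PCP = top 3 bits of TCI

# Hypothetical 1:1 PCP -> internal traffic-class map, for illustration.
PCP_TO_TC = {pcp: pcp for pcp in range(8)}

tagged = bytes(12) + VLAN_TPID.to_bytes(2, "big") + (5 << 13).to_bytes(2, "big")
assert extract_pcp(tagged) == 5
assert extract_pcp(bytes(12) + (0x0800).to_bytes(2, "big")) is None
```

The point of doing this at the IFG stage is that only the first few bytes of the frame need to be examined, so priority can be assigned to internal queues before the full parse in the Rx NPU.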
I guess your LinkedIn feed is as full of AI nonsense as mine is, so I usually just skip all that posturing. However, every now and then, I stumble upon an idea that makes sense… until you start to dig deeper into it.
There was this post about AI agents speaking BGP with an associated GitHub repo, so I could go take a look at what it’s all about.
The proof-of-concept (according to the post’s author) has two components:
*This post was updated at 12:35 pm PT to fix a typo in the build time benchmarks.
Last week, one engineer and an AI model rebuilt the most popular front-end framework from scratch. The result, vinext (pronounced "vee-next"), is a drop-in replacement for Next.js, built on Vite, that deploys to Cloudflare Workers with a single command. In early benchmarks, it builds production apps up to 4x faster and produces client bundles up to 57% smaller. And we already have customers running it in production.
The whole thing cost about $1,100 in tokens.
Next.js is the most popular React framework. Millions of developers use it. It powers a huge chunk of the production web, and for good reason. The developer experience is top-notch.
But Next.js has a deployment problem when used in the broader serverless ecosystem. The tooling is entirely bespoke: Next.js has invested heavily in Turbopack, but if you want to deploy it to Cloudflare, Netlify, or AWS Lambda, you have to take that build output and reshape it into something the target platform can actually run.
If you’re thinking: “Isn’t that what OpenNext does?”, you are correct. Continue reading
It may seem like you are having flashbacks, but you are not. …
Some More Game Theory, This Time On The AMD-Meta Platforms Deal was written by Timothy Prickett Morgan at The Next Platform.
Following a link in another Martin Fowler’s blog post, I stumbled upon his thoughts on Open Space events – a way to set up self-organizing events.
I’m not sure I’m brave (or young) enough to try it out, but if you’re planning to organize a small gathering (like a local Network Operator Group), this might be an interesting, slightly more structured approach than a Net::Beer event. It would also be nice to know whether someone managed to pull it off in an online format.
While releasing an update to its InferenceX AI inference benchmark (formerly known as InferenceMax, and thus far used only to test Nvidia and AMD systems), the analysts at SemiAnalysis correctly declared that so far only Nvidia, AWS, and Google have created rackscale systems that are deployed today, and that AMD is working on it. …
AMD Says “Helios” Racks And MI400 Series GPUs On Track For 2H 2026 was written by Timothy Prickett Morgan at The Next Platform.
For example, the Heatmap above comes from a large high performance compute cluster running a mixture of tasks. Traffic is concentrated along the diagonal, indicating that the job scheduler is packing related tasks in racks so that most traffic is confined to the rack.
Note: Live Dashboards links to a number of dashboards showing live traffic, including the Heatmap above.
The next Heatmap shows a very different traffic pattern: RoCEv2 traffic generated by GPUs performing an NCCL AllReduce/AllGather collective operation using a ring algorithm. During the collective operation, each GPU sends data to its immediate neighbor (modulo the number of GPUs) in a logical ring, resulting in two nearly continuous lines on either side of the diagonal: one for forward traffic, and the other for return traffic associated with each flow. The final example comes from a large data center hosting a mix of front end workloads. Unlike the backend networks, this network combines internal (East/West) Continue reading

It has taken three decades for HPC to move to the cloud, and the truth is that a lot of simulation and modeling applications are still coded to run on CPUs. …
CPU-Only Compute Still Matters To A Lot Of HPC Centers was written by Timothy Prickett Morgan at The Next Platform.
Petr Ankudinov wrote an excellent comment about netlab Fast cEOS Configuration implementation. Paraphrasing the original comment:
If the use case is the initial lab deployment, why don’t you use containerlab startup-config option to change the device’s startup configuration?
I have to admit, I’m too old to boldly go with the “just use the startup configuration” approach. In ancient times, Cisco IOS did crazy stuff if you rearranged the commands in the startup configuration. But ignoring that historical trivia (Cisco IOS/XE seems to be doing just fine), there are several reasons why I decided to use the startup configurations (and you can use them with some containers) only as a last resort:
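For context, the containerlab option Petr refers to is set per node in the topology file. A minimal sketch, with made-up lab name, image tag, and file paths:

```yaml
name: fast-ceos
topology:
  nodes:
    ceos1:
      kind: ceos
      image: ceos:4.32.0F
      startup-config: configs/ceos1.cfg   # applied when the node boots
```

With this approach the device comes up with the full configuration already in place, instead of having the lab tool push configuration over SSH/API after boot.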
During Security Week 2025, we launched the industry’s first cloud-native post-quantum Secure Web Gateway (SWG) and Zero Trust solution, a major step towards securing enterprise network traffic sent from end user devices to public and private networks.
But this is only part of the equation. To truly secure the future of enterprise networking, you need a complete Secure Access Service Edge (SASE).
Today, we complete the equation: Cloudflare One is the first SASE platform to support modern standards-compliant post-quantum (PQ) encryption in our Secure Web Gateway, and across Zero Trust and Wide Area Network (WAN) use cases. More specifically, Cloudflare One now offers post-quantum hybrid ML-KEM (Module-Lattice-based Key-Encapsulation Mechanism) across all major on-ramps and off-ramps.
To complete the equation, we added support for post-quantum encryption to our Cloudflare IPsec (our cloud-native WAN-as-a-Service) and Cloudflare One Appliance (our physical or virtual WAN appliance that establishes Cloudflare IPsec connections). Cloudflare IPsec uses the IPsec protocol to establish encrypted tunnels from a customer’s network to Cloudflare’s global network, while IP Anycast is used to automatically route that tunnel to the nearest Cloudflare data center. Cloudflare IPsec simplifies configuration and provides high availability; if a specific data center becomes unavailable, traffic Continue reading
On February 20, 2026, at 17:48 UTC, Cloudflare experienced a service outage when a subset of customers who use Cloudflare’s Bring Your Own IP (BYOIP) service saw their routes to the Internet withdrawn via Border Gateway Protocol (BGP).
The issue was not caused, directly or indirectly, by a cyberattack or malicious activity of any kind. This issue was caused by a change that Cloudflare made to how our network manages IP addresses onboarded through the BYOIP pipeline. This change caused Cloudflare to unintentionally withdraw customer prefixes.
For some BYOIP customers, this resulted in their services and applications being unreachable from the Internet, causing timeouts and failures to connect across their Cloudflare deployments that used BYOIP. The website for Cloudflare’s recursive DNS resolver (1.1.1.1) saw 403 errors as well. The total duration of the incident was 6 hours and 7 minutes with most of that time spent restoring prefix configurations to their state prior to the change.
Cloudflare engineers reverted the change and prefixes stopped being withdrawn when we began to observe failures. However, before engineers were able to revert the change, ~1,100 BYOIP prefixes were withdrawn from the Cloudflare network. Some customers were able to restore their Continue reading
AS-SETs (not that kind) were originally designed to simplify filtering at eBGP peering points, but they seem to have gone horribly wrong. Job Snijders and Doug Madory join Tom and Russ to discuss the history, use, problems, and (hopeful) demise of AS-SETs.
download