Discrepancy between network behavior in gVisor and runc #10908
Comments
We're not verifying incoming MAC addresses, but it should be a quick fix to add that. I assume this never came up because IP routing, in most systems, delivers only the packets destined for the container. If I can ask, is there a reason your containers are receiving errant packets?
Our network stack can change network packets on the host before they reach the container, which can cause the container to receive packets with incorrect MAC addresses.
Specifically, we have some custom overlay networking that allows containers to communicate as if they're on a private IPv6 subnet spanning hosts. These containers don't know each other's MACs, so while working across runc and gVisor we realized that the packets were forwarded fine on gVisor but not on runc.
We weren't verifying that inbound MAC addresses match the NIC, which led to netstack ingesting packets not meant for it. Fixes #10908. PiperOrigin-RevId: 674438500
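For illustration only: the fix described above amounts to an acceptance check on each inbound frame's destination MAC. The actual change lives in gVisor's Go netstack; the Python sketch below is a hypothetical rendering of the rule (accept unicast-to-us, broadcast, or multicast, and drop everything else), not the real implementation.

```python
# Hypothetical sketch of the acceptance rule the fix adds; the real
# implementation is in gVisor's netstack (Go), not Python.

BROADCAST_MAC = bytes([0xFF] * 6)  # ff:ff:ff:ff:ff:ff

def should_deliver(dst_mac: bytes, nic_mac: bytes) -> bool:
    """Accept a frame only if it is unicast to this NIC, broadcast,
    or multicast (group bit set in the first octet)."""
    if dst_mac == nic_mac:          # unicast addressed to this NIC
        return True
    if dst_mac == BROADCAST_MAC:    # broadcast frame
        return True
    if dst_mac[0] & 0x01:           # multicast: lowest bit of first octet
        return True
    return False                    # errant frame: drop it
```

Note that the broadcast address is just the all-ones case of a multicast MAC (its group bit is set too), which is presumably the "what constitutes a broadcast MAC address" subtlety mentioned in the comment below.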
This should be fixed. I repro'd and tested with your scapy snippet to confirm, and (after learning a bit about what constitutes a broadcast MAC address) this fixes + passes tests. Please re-open if you're still seeing this.
Thanks @kevinGC!
Description
Packets whose Ethernet headers carry an incorrect destination MAC address are dropped by runc, and by gVisor in host network mode. In gVisor's standard userspace networking mode, however, packets with incorrect MAC addresses are not dropped.
cc: @pawalt, @luiscape
Steps to reproduce
You should be able to use the scapy Python library to craft packets that replicate this behavior, sending them from the host toward the container.
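The original scapy snippet isn't preserved in this page; the following is a minimal reconstruction under stated assumptions. The interface name eth0, the container IP 172.17.0.2, and the destination MAC are illustrative, not the reporter's actual values.

```python
# Hypothetical repro: send an ICMP echo request whose destination MAC
# does not match the container NIC (and is neither broadcast nor
# multicast). A correct stack should drop it; buggy gVisor replies.
from scapy.all import Ether, IP, ICMP, sendp

pkt = (
    Ether(dst="02:42:de:ad:be:ef")  # deliberately wrong unicast MAC
    / IP(dst="172.17.0.2")          # assumed container IP
    / ICMP()                        # echo request
)
sendp(pkt, iface="eth0")  # assumed host-side veth of the container
```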
Then run tcpdump -veni eth0 against the container's host-side veth to see that response packets are being sent when the request packets should have been dropped in the container. With runc or host networking, no response packets will be received (the ICMP request packets are dropped).
runsc version
docker version (if using docker)
No response
uname
Linux ip-10-1-1-1.ec2.internal 5.15.0-209.161.7.2.el9uek.x86_64 #2 SMP Tue Aug 20 10:44:41 PDT 2024 x86_64 x86_64 x86_64 GNU/Linux
kubectl (if using Kubernetes)
No response
repo state (if built from source)
No response
runsc debug logs (if available)
No response