New swarm network overlaps with bridge subnet · Issue #50011 · moby/moby · GitHub

Open
2manyvcos opened this issue May 16, 2025 · 1 comment
Labels
area/networking/d/overlay area/networking/ipam area/networking area/swarm kind/bug status/0-triage version/28.1


Description

We are frequently encountering an issue where a new overlay network created by docker stack deploy overlaps with the bridge subnet, even though many unused subnets are still available in our address pool.

The newly created service refuses to start any tasks, failing with the error: invalid pool request: Pool overlaps with other one on this address space.

The network likely causing the issue has the same subnet as the bridge network:

# docker network inspect bridge api-gateway_cache | jq '.[].IPAM.Config'
[
  {
    "Subnet": "10.0.0.0/24",
    "Gateway": "10.0.0.1"
  }
]
[
  {
    "Subnet": "10.0.0.0/24",
    "Gateway": "10.0.0.1"
  }
]

Interestingly, Docker has only allocated networks in the 10.0.x.0/24 and 10.90.x.0/24 ranges (except for the ingress network at 10.255.0.0/16), even though our default address pool is 10.0.0.0/8.
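For reference, the collision itself is easy to demonstrate with Python's stdlib ipaddress module. This is an illustrative sketch of the overlap test that an IPAM allocator effectively performs, not moby's actual implementation:

```python
import ipaddress

def overlaps(a: str, b: str) -> bool:
    """True if the two CIDR blocks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# The bridge network and the new overlay network were both given 10.0.0.0/24,
# so any allocation attempt on that pool fails.
print(overlaps("10.0.0.0/24", "10.0.0.0/24"))  # True  -> "Pool overlaps with other one"
print(overlaps("10.0.0.0/24", "10.0.1.0/24"))  # False -> adjacent /24, no conflict
```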

Reproduce

docker stack deploy -c docker-compose.yml new-stack

Expected behavior

The newly created network should not overlap with an existing network.

docker version

Client: Docker Engine - Community
 Version:           28.1.1
 API version:       1.49
 Go version:        go1.23.8
 Git commit:        4eba377
 Built:             Fri Apr 18 09:52:18 2025
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          28.1.1
  API version:      1.49 (minimum version 1.24)
  Go version:       go1.23.8
  Git commit:       01f442b
  Built:            Fri Apr 18 09:52:18 2025
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.7.27
  GitCommit:        05044ec0a9a75232cad458027ca83437aae3f4da
 runc:
  Version:          1.2.5
  GitCommit:        v1.2.5-0-g59923ef
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

docker info

Client: Docker Engine - Community
 Version:    28.1.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.23.0
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.35.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 34
  Running: 33
  Paused: 0
  Stopped: 1
 Images: 27
 Server Version: 28.1.1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: active
  NodeID: 0npgq3tmmaemz67lek2949bdb
  Is Manager: true
  ClusterID: ywen3qki0so81tp15w5u9a0td
  Managers: 4
  Nodes: 13
  Default Address Pool: 10.0.0.0/8  
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 2
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 192.168.18.33
  Manager Addresses:
   192.168.18.33:2377
   192.168.18.37:2377
   192.168.18.40:2377
   192.168.18.43:2377
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05044ec0a9a75232cad458027ca83437aae3f4da
 runc version: v1.2.5-0-g59923ef
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
 Kernel Version: 5.4.0-214-generic
 Operating System: Ubuntu 20.04.6 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.23GiB
 Name: residential-warehouse
 ID: b113fa4b-bccd-4e0c-9f4a-b231b73604fa
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  ::1/128
  127.0.0.0/8
 Live Restore Enabled: false
 Default Address Pools:
   Base: 10.0.0.0/8, Size: 24
   Base: 172.16.0.0/12, Size: 24

WARNING: No swap limit support

Additional Info

OS info:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.6 LTS"
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Allocated docker networks across all cluster nodes (collected with docker network inspect $(docker network ls -q) on each host):

[
  "10.0.0.0/24",
  "10.0.1.0/24",
  "10.0.10.0/24",
  "10.0.11.0/24",
  "10.0.12.0/24",
  "10.0.129.0/24",
  "10.0.13.0/24",
  "10.0.130.0/24",
  "10.0.131.0/24",
  "10.0.132.0/24",
  "10.0.14.0/24",
  "10.0.15.0/24",
  "10.0.154.0/24",
  "10.0.155.0/24",
  "10.0.16.0/24",
  "10.0.17.0/24",
  "10.0.170.0/24",
  "10.0.18.0/24",
  "10.0.2.0/24",
  "10.0.20.0/24",
  "10.0.21.0/24",
  "10.0.22.0/24",
  "10.0.23.0/24",
  "10.0.24.0/24",
  "10.0.25.0/24",
  "10.0.26.0/24",
  "10.0.3.0/24",
  "10.0.30.0/24",
  "10.0.31.0/24",
  "10.0.32.0/24",
  "10.0.4.0/24",
  "10.0.5.0/24",
  "10.0.52.0/24",
  "10.0.53.0/24",
  "10.0.55.0/24",
  "10.0.56.0/24",
  "10.0.58.0/24",
  "10.0.6.0/24",
  "10.0.66.0/24",
  "10.0.7.0/24",
  "10.0.8.0/24",
  "10.0.9.0/24",
  "10.255.0.0/16",
  "10.90.0.0/24",
  "10.90.1.0/24",
  "10.90.100.0/24",
  "10.90.14.0/24",
  "10.90.15.0/24",
  "10.90.16.0/24",
  "10.90.2.0/24",
  "10.90.3.0/24",
  "10.90.5.0/24",
  "10.90.8.0/24",
  "10.90.80.0/24",
  "10.90.90.0/24",
  "172.18.0.0/16"
]
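Given the list above, plenty of /24 subnets inside 10.0.0.0/8 are clearly still free, which is what makes the allocation failure surprising. A minimal sketch of the expected allocator behavior, scanning the pool for the first /24 that overlaps nothing already allocated (the allocated set below is a small subset of the list above, for illustration):

```python
import ipaddress

# Subset of the subnets already allocated across the cluster (see list above).
allocated = {
    ipaddress.ip_network(s)
    for s in ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24",
              "10.255.0.0/16", "172.18.0.0/16"]
}

def first_free_24(pool: str = "10.0.0.0/8"):
    """Return the first /24 in the pool that overlaps no allocated subnet."""
    for candidate in ipaddress.ip_network(pool).subnets(new_prefix=24):
        if not any(candidate.overlaps(existing) for existing in allocated):
            return candidate
    return None  # pool exhausted

print(first_free_24())  # 10.0.3.0/24 with the subset above
```

With the full list from this report, the same scan would still find many free /24s, so the "Pool overlaps" error suggests the allocator is not considering the full pool rather than the pool being exhausted.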
2manyvcos (Author) commented May 16, 2025

As a workaround, we create a temporary overlay network with the same subnet as the bridge network:

docker network create --driver overlay --subnet 10.0.0.0/24 tmp-overlap-fix

Then we run docker stack deploy as usual.

Afterwards we remove the temporary network.
