Support swarm-mode services with node-local networks #32981
Conversation
@aboch build is failing.
If I understand the design correctly, in order to attach containers from a swarm service to the default … Assuming that …
@rreinurm that is correct if the original network is created with a …

@aboch Can you please update the PR title from macvlan -> node-local?
@aboch can you please give more details on why we need … In addition to these …
I updated the description to make it clear that the … The fact that the network driver is multihost-capable is an internal detail between the network driver and libnetwork core, and it is handled there transparently to the user. @rreinurm The reason why you would not be able to run a service on the default … If you do not have any specific network configurations for the bridge network (read: you do not need the …)
cli/command/network/create.go
@@ -63,6 +66,12 @@ func newCreateCommand(dockerCli *command.DockerCli) *cobra.Command {
	flags.SetAnnotation("attachable", "version", []string{"1.25"})
	flags.BoolVar(&opts.ingress, "ingress", false, "Create swarm routing-mesh network")
	flags.SetAnnotation("ingress", "version", []string{"1.29"})
	flags.BoolVar(&opts.swarm, "swarm", false, "Create a swarm network")
If I saw this as a user, I would think I had to specify this flag when creating an overlay network I intended to use with Swarm, but I assume that's not actually the case.
Yeah, we are still debating internally about the proper flag. We are thinking of `--scope=[local|swarm]`, where if not specified, the default network driver scope is used. So the flag would be to promote to swarm scope a network whose default driver scope is local. It seems like there would also be future use-cases for the reverse, a downgrade.
cli/command/network/create.go
@@ -63,6 +66,12 @@ func newCreateCommand(dockerCli *command.DockerCli) *cobra.Command {
	flags.SetAnnotation("attachable", "version", []string{"1.25"})
	flags.BoolVar(&opts.ingress, "ingress", false, "Create swarm routing-mesh network")
	flags.SetAnnotation("ingress", "version", []string{"1.29"})
	flags.BoolVar(&opts.swarm, "swarm", false, "Create a swarm network")
	flags.SetAnnotation("swarm", "version", []string{"1.29"})
Looks like the current API version is 1.30.
Thanks
if n.ingress || n.internal || n.attachable {
	return types.ForbiddenErrorf("configuration network can only contain network " +
		"specific fields. Network operator fields like " +
		"[ ingress | internal | attachable] are not supported.")
Add a space between `attachable` and `]`?
👍
	return fmt.Errorf("failed to validate network configuration: %v", err)
}
if len(driverOptions) > 0 {
	return types.ForbiddenErrorf("network driver options are not supported if the network depends on a configuration network ")
Trailing space
👍
}
if err := json.Unmarshal(ba, &driverOptions); err != nil {
	return fmt.Errorf("failed to validate network configuration: %v", err)
}
Trying to understand this. We marshal the data to JSON, and then unmarshal it back? I guess the idea is to convert `map[string]interface{}` to `map[string]string`, but I think a simple loop over the map would be a more straightforward way.
The problem is that opts may be of type `map[string]interface{}` or `map[string]string`. Because we initially used `options.Generic` for the driver options, the Go JSON conversion of these types has been a continuous headache. The Marshal & Unmarshal pattern has proven to be the simplest way to work around the issue. You can see it used in several places in libnetwork.
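The round-trip described above can be sketched in isolation. Note this is an illustrative sketch: the `normalizeOptions` name and the sample options are hypothetical, not the actual libnetwork identifiers.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// normalizeOptions converts driver options that may arrive either as
// map[string]interface{} (from generic JSON decoding) or map[string]string
// into a map[string]string, by round-tripping through JSON.
// A side benefit: a non-string value makes Unmarshal fail, which doubles
// as validation of the option values.
func normalizeOptions(opts interface{}) (map[string]string, error) {
	ba, err := json.Marshal(opts)
	if err != nil {
		return nil, fmt.Errorf("failed to validate network configuration: %v", err)
	}
	driverOptions := make(map[string]string)
	if err := json.Unmarshal(ba, &driverOptions); err != nil {
		return nil, fmt.Errorf("failed to validate network configuration: %v", err)
	}
	return driverOptions, nil
}

func main() {
	// Options as they would arrive from a generic JSON decode.
	generic := map[string]interface{}{"parent": "ens192", "macvlan_mode": "bridge"}
	out, err := normalizeOptions(generic)
	if err != nil {
		panic(err)
	}
	fmt.Println(out["parent"], out["macvlan_mode"]) // prints "ens192 bridge"
}
```

The loop suggested in the review would also work for the happy path, but would need explicit type switches on each value, which is what the round-trip avoids.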
// ConfigFrom indicates that the network specific configuration
// for this network will be provided via another network, locally
// on the node where this network is being plumbed.
string config_from = 8;
cc @ijc25
@aboch I tried the PR and the functionalities are working as advertised. A few overall comments:
@aboch Does this work with ipvlan as well? The following was tested against a CentOS 7 build of aboch/docker@be38dba94df1e05d7381c806aa9cc03cf5216368 -
Thank you for working on this integration. If merged, it will help us adopt Swarm, as we are using ipvlan L2 currently. Also, can global-scoped IPAM be used with this, or is that only applicable to overlay for now? I've confirmed the locally-scoped IPAM works fine via macvlan, which is still a pretty big win:

docker network create --config-only --subnet=10.0.0.0/24 --gateway=10.0.0.1 --ip-range=10.0.0.64/26 -o parent=ens192 test_macvlan_bridge # node 1
Thanks @eyz. Yes, I did not add it because it is an experimental driver. I will add support for it, so that users who are fine with running the daemon with the experimental flag can make use of it.
@aboch I understand this is still in design review. But it will help if you can open the PRs against libnetwork and swarmkit individually, so that it is easier to review those components while the moby-side glue is reviewed in this PR. From a design standpoint, I am glad that this PR addresses the overall theme. Just a few additional changes will make it even better.
@@ -224,7 +224,7 @@ func (c *controller) agentSetup() error {
 	logrus.Errorf("Error in agentInit : %v", err)
 } else {
 	c.drvRegistry.WalkDrivers(func(name string, driver driverapi.Driver, capability driverapi.Capability) bool {
-		if capability.DataScope == datastore.GlobalScope {
+		if capability.Multihost {
Is this going to be Swarm scope? Looking further, it looks like config-wise we say to create a network with scope swarm, but the driver capability is multihost. Is my understanding correct?
No. This is a network-driver-specific capability, stating whether the networks created by this driver can provide multihost connectivity. For example, this is the case for macvlan, ipvlan, and overlay. Based on this capability, and the user's intention to deploy a swarm network using this driver, libnetwork core will join the cluster-wide network DB for this network. This is already taken care of.

The requirement this PR addresses did not include running services on …
@@ -720,6 +744,26 @@ func (c *controller) NewNetwork(networkType, name string, id string, options ...
	return nil, err
}

// From this point on, we need the network specific configuration,
// which may come form aconfiguration-only network
from a configuration
@@ -396,13 +396,15 @@ type NetworkResource struct {
 	Name string // Name is the requested name of the network
 	ID string `json:"Id"` // ID uniquely identifies a network on a single machine
 	Created time.Time // Created is the time the network created
-	Scope string // Scope describes the level at which the network exists (e.g. `global` for cluster-wide or `local` for machine level)
+	Scope string // Scope describes the level at which the network exists (e.g. `swarm` for cluster-wide or `local` for machine level)
I prefer `global` to `swarm`. `swarm` itself doesn't have a clear meaning as a network scope. `swarm` could be aware of a network on a local node.
We discussed this internally, and the reason we opted for `swarm` is that it is easy to associate with the swarm feature. Also, in the case of single-host networks promoted to swarm scope, `global` would not be correct, because the network is not a multihost network. It just indicates the network is usable within swarm.
@dongluochen We are already using the `swarm` scope terminology in `docker network ls`, even though the network is backed by a global-scoped driver. But with this PR we are also supporting local-scoped drivers that can be used in swarm mode. With the currently used terminology, and the scope of using the network at a swarm level, I think the `swarm` scope indicates it appropriately. WDYT?
api/types/types.go
Driver string // Driver is the Driver name used to create the network (e.g. `bridge`, `overlay`)
EnableIPv6 bool // EnableIPv6 represents whether to enable IPv6
IPAM network.IPAM // IPAM is the network's IP Address Management
Internal bool // Internal represents if the network is used internal only
Attachable bool // Attachable represents if the global scope is manually attachable by regular containers from workers in swarm mode.
Ingress bool // Ingress indicates the network is providing the routing-mesh for the swarm cluster.
ConfigFrom string // ConfigFrom contains the name of the configuration network to be used to configure this network
ConfigOnly bool // ConfigOnly describes whether this is a configuration network
`configuration network` is not a clear networking concept to me. Maybe consider another name, or add an explanation.
I will try to improve the explanation.
@aboch @dongluochen ConfigOnly networks are just placeholder networks for network configurations to be used by other networks. These ConfigOnly networks cannot be used directly to run containers or services. Does that explain it adequately?
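As a concrete illustration of the config-only/config-from split: the commands below mirror the test commands posted later in this thread; the subnet values, the `ens192` parent interface, and the network names are illustrative, not required values.

```
# On each node: a config-only network holding that node's local settings.
# It is saved to the store, but never allocated or plumbed.
docker network create --config-only --subnet=10.0.0.0/24 --gateway=10.0.0.1 \
  --ip-range=10.0.0.64/26 -o parent=ens192 test_config_only

# On a manager: the swarm-scoped network that picks up its node-local
# configuration from the config-only network present on each node.
docker network create -d macvlan --scope=swarm --config-from test_config_only swarm_test
```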
There are issues asking for local load-balancing support. Tasks from service A running on a node should access service B instances running on the same node. Does a node-local network enable such scenarios?

@dongluochen This PR addresses the ability to launch services or containers on any node-local network, …
There is probably a better place to ask this question, so I apologize ahead of time, but, possibly related to @dongluochen's question, and something that has not been clear to me: is the Docker Swarm IPVS L4 VIP-based load balancing specific to the overlay network driver, or is it active on any globally-scoped network driver, such as this implementation? I know multi-record round-robin DNS queries are working fine with this implementation in my testing. However, the only references to the IPVS VIP-based load balancing I've found mention its use with overlay, such as shown here - https://success.docker.com/@api/deki/files/203/driver-comparison.png
As requested, I added support for running docker services on predefined networks (…). Please give it a try.
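With that change, running a service directly on a predefined node-local network should look roughly like the sketch below. The service names and image are illustrative assumptions, not taken from the PR itself.

```
# Run services on the predefined host and bridge networks
docker service create --name web --network host nginx:alpine
docker service create --name app --network bridge nginx:alpine
```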
windows-rs1 CI (https://jenkins.dockerproject.org/job/Docker-PRs-WoW-RS1/14121/console) is stuck at the starting point, and Jenkins is acting up and very slow to even get the correct status. @aboch confirmed that windows-rs1 CI was green in his previous attempts (https://jenkins.dockerproject.org/job/Docker-PRs-WoW-RS1/14106/consoleFull). With all other CIs green, I will go ahead with the merge to unblock subsequent PRs. ==== EDIT ===== Windows-RS1 re-run successful: https://jenkins.dockerproject.org/job/Docker-PRs-WoW-RS1/14131/console
Hi all. First off: LGTM 👍 🎉 I had great success testing this PR #32981 with ipvlan on CentOS 7.x. The manpages still aren't building in the Makefile, but otherwise it is working great -
isaac.rodman@ubuntu:/home/isaac/go/src/github.com/moby/moby# find contrib/builder/ contrib/builder/ contrib/builder/rpm contrib/builder/rpm/amd64 contrib/builder/rpm/amd64/build.sh contrib/builder/rpm/amd64/README.md contrib/builder/rpm/amd64/generate.sh contrib/builder/rpm/amd64/centos-7 contrib/builder/rpm/amd64/centos-7/Dockerfile
#manpages: ## Generate man pages from go source and markdown # docker build ${DOCKER_BUILD_ARGS} -t docker-manpage-dev -f "man/$(DOCKERFILE)" ./man # docker run --rm \ # -v $(PWD):/go/src/github.com/docker/docker/ \ # docker-manpage-dev
make rpm
---> Making bundle: build-rpm (in bundles/17.06.0-dev/build-rpm) Using test binary docker +++ /etc/init.d/apparmor start Starting AppArmor profiles:Warning from stdin (line 1): /sbin/apparmor_parser: cannot use or update cache, disable, or force-complain via stdin . INFO: Waiting for daemon to start... +++ exec dockerd --debug --host unix:///go/src/github.com/docker/docker/bundles/17.06.0-dev/build-rpm/docker.sock --storage-driver btrfs --pidfile bundles/17.06.0-dev/build-rpm/docker.pid --userland-proxy=true . make: Nothing to be done for 'manpages'. ++ docker build -t dockercore/builder-rpm:centos-7 contrib/builder/rpm/amd64/centos-7/ Sending build context to Docker daemon 2.56kB Step 1/10 : FROM centos:7 7: Pulling from library/centos 343b09361036: Pulling fs layer 343b09361036: Verifying Checksum 343b09361036: Download complete 343b09361036: Pull complete Digest: sha256:bba1de7c9d900a898e3cadbae040dfe8a633c06bc104a0df76ae24483e03c077 Status: Downloaded newer image for centos:7 ---> 8140d0c64310 [... elided ...] Step 10/18 : WORKDIR /root/rpmbuild/SPECS ---> 941f5e39f6b8 Removing intermediate container 9065ac639458 Step 11/18 : RUN tar --exclude .git -r -C /usr/src -f /root/rpmbuild/SOURCES/docker-engine.tar docker-engine ---> Running in b917994182b2 ---> e1657a479ebe Removing intermediate container b917994182b2 Step 12/18 : RUN tar --exclude .git -r -C /go/src/github.com/docker -f /root/rpmbuild/SOURCES/docker-engine.tar containerd ---> Running in c2d757160a23 tar: containerd: Cannot stat: No such file or directory tar: Exiting with failure status due to previous errors The command '/bin/sh -c tar --exclude .git -r -C /go/src/github.com/docker -f /root/rpmbuild/SOURCES/docker-engine.tar containerd' returned a non-zero code: 2 ---> Making bundle: .integration-daemon-stop (in bundles/17.06.0-dev/build-rpm) +++++ cat bundles/17.06.0-dev/build-rpm/docker.pid ++++ kill 3258 ++++ /etc/init.d/apparmor stop Clearing AppArmor profiles cache:. 
All profile caches have been cleared, but no profiles have been unloaded. Unloading profiles will leave already running processes permanently unconfined, which can lead to unexpected situations. To set a process to complain mode, use the command line tool 'aa-complain'. To really tear down all profiles, run the init script with the 'teardown' option." Makefile:144: recipe for target 'rpm' failed make: *** [rpm] Error 1
#RUN tar --exclude .git -r -C /go/src/github.com/docker -f /root/rpmbuild/SOURCES/docker-engine.tar containerd #RUN tar --exclude .git -r -C /go/src/github.com/docker/libnetwork/cmd -f /root/rpmbuild/SOURCES/docker-engine.tar proxy #RUN tar --exclude .git -r -C /go/src/github.com/opencontainers -f /root/rpmbuild/SOURCES/docker-engine.tar runc #RUN tar --exclude .git -r -C /go/ -f /root/rpmbuild/SOURCES/docker-engine.tar tini #RUN gzip /root/rpmbuild/SOURCES/docker-engine.tar #RUN { cat /usr/src/docker-engine/contrib/builder/rpm/amd64/changelog; } >> docker-engine.spec && tail >&2 docker-engine.spec #RUN rpmbuild -ba --define '_gitcommit 4874e05-unsupported' --define '_release 0.0.20170518.023124.git4874e05' --define '_version 17.06.0' --define '_origversion 17.06.0-dev' --define '_experimental 0' docker-engine.spec ^ removed extra whitespace on rpmbuild line after generated, to paste easier later
docker build -t docker-temp/build-rpm:centos-7 -f bundles/17.06.0-dev/build-rpm/centos-7/Dockerfile.build . docker run -ti docker-temp/build-rpm:centos-7 bash
cd /root/rpmbuild/SPECS
# install manpages #install -d %{buildroot}%{_mandir}/man1 #install -p -m 644 man/man1/*.1 $RPM_BUILD_ROOT/%{_mandir}/man1 #install -d %{buildroot}%{_mandir}/man5 #install -p -m 644 man/man5/*.5 $RPM_BUILD_ROOT/%{_mandir}/man5 #install -d %{buildroot}%{_mandir}/man8 #install -p -m 644 man/man8/*.8 $RPM_BUILD_ROOT/%{_mandir}/man8
%files %doc AUTHORS CHANGELOG.md CONTRIBUTING.md LICENSE MAINTAINERS NOTICE README.md #/%{_bindir}/docker
%doc #/%{_mandir}/man1/* #/%{_mandir}/man5/* #/%{_mandir}/man8/*
# tar --exclude .git -r -C /go/src/github.com/docker -f /root/rpmbuild/SOURCES/docker-engine.tar containerd tar --exclude .git -r -C /go/src/github.com/docker/libnetwork/cmd -f /root/rpmbuild/SOURCES/docker-engine.tar proxy tar --exclude .git -r -C /go/src/github.com/opencontainers -f /root/rpmbuild/SOURCES/docker-engine.tar runc tar --exclude .git -r -C /go/ -f /root/rpmbuild/SOURCES/docker-engine.tar tini gzip /root/rpmbuild/SOURCES/docker-engine.tar # { cat /usr/src/docker-engine/contrib/builder/rpm/amd64/changelog; } >> docker-engine.spec && tail >&2 docker-engine.spec rpmbuild -ba --define '_gitcommit 4874e05-unsupported' --define '_release 0.0.20170518.023124.git4874e05' --define '_version 17.06.0' --define '_origversion 17.06.0-dev' --define '_experimental 0' docker-engine.spec
[root@63168ae7b4ef SPECS]# find /root/rpmbuild/RPMS/x86_64/ -type f /root/rpmbuild/RPMS/x86_64/docker-engine-17.06.0-0.0.20170518.023124.git4874e05.el7.centos.x86_64.rpm /root/rpmbuild/RPMS/x86_64/docker-engine-debuginfo-17.06.0-0.0.20170518.023124.git4874e05.el7.centos.x86_64.rpm
yum -y install container-selinux rpm -i docker-*.rpm
docker network create --config-only --subnet=10.0.0.0/24 --gateway=10.0.0.1 --ip-range=10.0.0.64/26 -o parent=ens192 test_config_only # node 1
docker network create --config-only --subnet=10.0.0.0/24 --gateway=10.0.0.1 --ip-range=10.0.0.128/26 -o parent=ens192 test_config_only # node 2
docker network create --config-only --subnet=10.0.0.0/24 --gateway=10.0.0.1 --ip-range=10.0.0.192/26 -o parent=ens192 test_config_only # node 3
docker network create -d ipvlan --scope=swarm --config-from test_config_only --attachable swarm_test docker deploy -c /root/docker-compose-v3-test.yml voting
version: "3" services: redis: image: redis:3.2-alpine ports: - "6379" networks: - votingnet deploy: placement: constraints: [node.role == manager] db: image: postgres:9.4 volumes: - db-data:/var/lib/postgresql/data networks: - votingnet deploy: placement: constraints: [node.role == manager] voting-app: image: gaiadocker/example-voting-app-vote:good ports: - 5000:80 networks: - votingnet depends_on: - redis deploy: mode: replicated replicas: 2 labels: [APP=VOTING] placement: constraints: [node.role == worker] result-app: image: gaiadocker/example-voting-app-result:latest ports: - 5001:80 networks: - votingnet depends_on: - db worker: image: gaiadocker/example-voting-app-worker:latest networks: votingnet: aliases: - workers depends_on: - db - redis # service deployment deploy: mode: replicated replicas: 2 labels: [APP=VOTING] # service resource management resources: # Hard limit - Docker does not allow to allocate more limits: cpus: '0.25' memory: 512M # Soft limit - Docker makes best effort to return to it reservations: cpus: '0.25' memory: 256M # service restart policy restart_policy: condition: on-failure delay: 5s max_attempts: 3 window: 120s # service update configuration update_config: parallelism: 1 delay: 10s failure_action: continue monitor: 60s max_failure_ratio: 0.3 # placement constraint - in this case on 'worker' nodes only placement: constraints: [node.role == worker] networks: votingnet: external: name: swarm_test volumes: db-data:
[root@centos7swarmtest01 ~]# docker service ps --no-trunc voting ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS 51ebcxozsf8clwd5c3gj10l6c voting_worker.1 gaiadocker/example-voting-app-worker:latest centos7swarmtest02 Running Running 29 minutes ago 7ubvm4znlk2lls2sbw2orri7b voting_result-app.1 gaiadocker/example-voting-app-result:latest centos7swarmtest02 Running Running 29 minutes ago znz47wst93kfic0lxlgdteoz6 voting_voting-app.1 gaiadocker/example-voting-app-vote:good centos7swarmtest03 Running Running 30 minutes ago r2f8r28z6nxe5bbk5t7wl2rbc voting_db.1 postgres:9.4 centos7swarmtest01 Running Running 29 minutes ago 7xe7da400oycj3w0qs48d8xbk voting_redis.1 redis:3.2-alpine centos7swarmtest01 Running Running 30 minutes ago zg97amwo70q800bgs11pe2h1p voting_worker.2 gaiadocker/example-voting-app-worker:latest centos7swarmtest03 Running Running 29 minutes ago xeh62taucdw6uso3nrebogfp6 voting_voting-app.2 gaiadocker/example-voting-app-vote:good centos7swarmtest02 Running Running 29 minutes ago
[root@centos7swarmtest01 ~]# MAC='4f'; echo 'Host - '; ip addr show ens192 | grep "$MAC"; echo 'Containers -'; for CID in $(docker ps -a -q); do echo -n "$CID:"; docker exec -ti $CID ip addr | grep "$MAC"; docker inspect $CID | grep 'IPAddress'; done Host - link/ether 00:50:56:bc:74:4f brd ff:ff:ff:ff:ff:ff Containers - 4e50f691d32c: link/ether 00:50:56:bc:74:4f brd ff:ff:ff:ff:ff:ff "SecondaryIPAddresses": null, "IPAddress": "", "IPAddress": "10.0.0.65", 3bbc02a6a682: link/ether 00:50:56:bc:74:4f brd ff:ff:ff:ff:ff:ff "SecondaryIPAddresses": null, "IPAddress": "", "IPAddress": "10.255.0.6", "IPAddress": "10.0.0.64",
[root@centos7swarmtest02 isaac.rodman]# MAC='7f'; echo 'Host - '; ip addr show ens192 | grep "$MAC"; echo 'Containers -'; for CID in $(docker ps -a -q); do echo -n "$CID:"; docker exec -ti $CID ip addr | grep "$MAC"; docker inspect $CID | grep 'IPAddress'; done [root@centos7swarmtest02 isaac.rodman]# MAC='7f'; echo 'Host - '; ip addr show ens192 | grep "$MAC"; echo 'Containers -'; for CID in $(docker ps -a -q); do echo -n "$CID:"; docker exec -ti $CID ip addr | grep "$MAC"; docker inspect $CID | grep 'IPAddress'; done Host - link/ether 00:50:56:bc:7b:7f brd ff:ff:ff:ff:ff:ff Containers - 7d60ddb820dc: link/ether 00:50:56:bc:7b:7f brd ff:ff:ff:ff:ff:ff "SecondaryIPAddresses": null, "IPAddress": "", "IPAddress": "10.255.0.11", "IPAddress": "10.0.0.130", 736be9a13208: link/ether 00:50:56:bc:7b:7f brd ff:ff:ff:ff:ff:ff "SecondaryIPAddresses": null, "IPAddress": "", "IPAddress": "10.0.0.129", 65364fde80c1: link/ether 00:50:56:bc:7b:7f brd ff:ff:ff:ff:ff:ff "SecondaryIPAddresses": null, "IPAddress": "", "IPAddress": "10.255.0.9", "IPAddress": "10.0.0.128",
[root@centos7swarmtest03 isaac.rodman]# MAC='fb'; echo 'Host - '; ip addr show ens192 | grep "$MAC"; echo 'Containers -'; for CID in $(docker ps -a -q); do echo -n "$CID:"; docker exec -ti $CID ip addr | grep "$MAC"; docker inspect $CID | grep 'IPAddress'; done Host - link/ether 00:50:56:bc:43:fb brd ff:ff:ff:ff:ff:ff Containers - 3d4051304e0f: link/ether 00:50:56:bc:43:fb brd ff:ff:ff:ff:ff:ff "SecondaryIPAddresses": null, "IPAddress": "", "IPAddress": "10.0.0.193", c48496f97e40: link/ether 00:50:56:bc:43:fb brd ff:ff:ff:ff:ff:ff "SecondaryIPAddresses": null, "IPAddress": "", "IPAddress": "10.255.0.8", "IPAddress": "10.0.0.192",
🎉 Cheers!
systemctl stop docker rm -rf /var/lib/docker/* vgremove -y docker vgcreate docker /dev/sdb echo y | lvcreate --wipesignatures y -n thinpool docker -l 95%VG echo y | lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta cat > /etc/lvm/profile/docker-thinpool.profile <<'EOF' activation { thin_pool_autoextend_threshold=80 thin_pool_autoextend_percent=20 } EOF lvchange --metadataprofile docker-thinpool docker/thinpool # Verify the LV "thinpool" if lvs -o+seg_monitor docker | grep 'thinpool' | grep 'monitored'; then echo '* docker thinpool is monitored; SUCCESS' else echo '* docker thinpool is not monitored; verify before starting Docker' fi cat > /etc/docker/daemon.json <<'EOF' { "experimental": true, "storage-driver": "devicemapper", "storage-opts": [ "dm.thinpooldev=/dev/mapper/docker-thinpool", "dm.use_deferred_removal=true", "dm.use_deferred_deletion=true" ] } EOF systemctl start docker
@eyz thanks a lot for validating this PR. The logs you posted will be very helpful for others to follow.

@thaJeztah I'm still waiting for the completions to be moved to docker/cli.
I believe I found a bug with the combination of --config-only and --ipam-driver (see below). See: ipam-driver cannot be used with config-only network #33415
Hi, … Thanks.
@smakam when using …
@thaJeztah Thanks for the response. Yes, I understand that we cannot do cross-node communication with a bridge network. What I see is that Swarm is not aware of whether a network is node-local or node-global (don't know if that's the right term), so it can schedule services across nodes where connectivity will not work. Do we take care of this by specifying constraints, so that services on a node-local network get scheduled on the same node, or will Swarm be able to do it automatically if it's a node-local network?
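One way to pin such services today is an explicit placement constraint in the stack file, mirroring the `node.role` constraints used in the compose example earlier in this thread. This is a sketch, not an answer from the maintainers; the service name, network name, and hostname are illustrative.

```
services:
  app:
    networks:
      - my_node_local_net
    deploy:
      placement:
        constraints: [node.hostname == node1]
```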
Not quite the right place to ask, but it's related to this feature: is there a way for containers of a service to be present on both the host network and an overlay network? My use case is that I have an HTTP server doing reverse-proxying and TLS termination for several services. The problem is that if I put my service in the overlay and publish the port, then there is no way to know the IP of the remote client (#25526). But if I put this service in the host network, then I cannot address the other services to relay the traffic.
Looks like a dead end.
I completely got lost here. Is it possible to create a docker service with (--net=host) as it used to be? I don't want to create any overlay networks.
Keep in mind that the GitHub issue tracker is not intended as a general support forum. I'm locking the discussion on this PR. If you think there's a bug in this feature, please open a bug report.
This PR makes changes to support creating swarm networks with the macvlan driver, or with any other local-datascope driver capable of creating multihost networks. Edit: Now also over the default `bridge` and `host` networks. See #27082.

If the network driver is also capable of creating networks which provide connectivity across hosts (as the macvlan driver does, for example), then service discovery will be provided as it is done for global-datascope drivers like overlay.
In order to achieve that, it adds three new flags to the network create API: ~~`multihost`~~ `scope=swarm`, `config-only`, and `config-from`.

While a global-datascope driver network is by default created as a swarm network (see the overlay network as an example), we need an option to make a local-scope driver network creation a swarm network creation. The ~~`multihost`~~ `scope=swarm`
option is for that. This is needed to ensure both regular and swarm networks from the same driver can coexist.

Edit: Note: I am now thinking this flag should be renamed `multihost` --> `scope=swarm`, as its actual semantic is "I want this network to be a swarm-visible network".

For local-datascope driver networks, resource allocation has to happen on the host; it cannot happen in the swarm manager, because the resources are tied to the underlay network and/or (as for macvlan) to the host network interfaces, so they are different on each swarm node. Once the dynamic swarm network creation request is sent to libnetwork, libnetwork on each swarm node needs to know from where to pick the local-scope (therefore node-specific) configuration for the network.
This is fine for simple local-scope, single-host driver networks, which do not require specific configurations and are fine with the default IPAM resources. But for local-scope, multihost-capable driver networks, there is a need to provide a global identification of the docker network across the swarm nodes (it being a multihost network entity), besides the need for host-specific network configurations, like the parent interface for macvlan networks, for example.
In order to solve this, the user is given the ability to create `config-only` networks which contain the network-specific configurations on each host. This kind of network is saved to the store, but no allocation or plumbing is attempted for it. The `config-from` option comes into play when the client creates the swarm network and wants to specify which configuration network will provide the node-local network-specific configurations.

Now the user can create a, say, macvlan swarm network toward the swarm manager, specifying that the configuration information comes from another network which will be present on the nodes, with …, after having created the config networks on the swarm nodes with: …
Note: command reference and swagger changes, and integration tests, will be added as the design review proceeds.
Note: If a node-local, multihost-capable driver is capable of autonomously allocating its resources on each node, then there is no need for the user to create a `config-only` network and pass it during the swarm network creation as `config-from`.
Closes #27082
Fixes docker does not respect previously setup FORWARD rules #29184
Fixes Docker should not update FORWARD chain on startup #23987
Related to #24848