RHEL7/CentOS7 cannot reach another container published network service #32138
Comments
I forgot to give credit to the person who solved the problem: Nena on StackOverflow.
Hi @thaJeztah Nope, firewalld was not restarted. Between step 1 and step 2, nothing is done on the machine. It is possible to add a step 0
Oh, sorry for the confusion; I meant the docker daemon, i.e. whether you restarted it. The docker daemon creates certain iptables rules when the service is started; if firewalld is (re)started or reloaded afterwards, those rules may be flushed.
No problem, thanks for helping :-) I know I cannot restart Docker on the main server because it is configured to restart all containers (we are running CentOS 7.3, so we have this bug where we need to set mounts as private in the systemd unit in order to avoid leaking LVM mounts, but this implies that we cannot use live restore). However, I can do that on my KVM instance (virtual machine), on which nothing important is running. So on this VM, I'm also running CentOS 7.3 with all updates applied (as of this weekend); the setup is similar to the other host where we run Docker 17.03.0-ce. The only difference is that the storage driver is overlay instead of LVM. So when I do this after a clean reboot:
This fails with the same error as before.
Thanks for taking the time to try that. I just tried to reproduce on a fresh CentOS 7.3 droplet on DigitalOcean, but was not able to reproduce 😢. To exclude possibilities:
So I did the same as you: I created a CentOS 7 droplet, then I did a distro-sync and made sure firewalld is activated before rebooting:
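Something along these lines (a sketch; the exact package-manager flags are an assumption):

```console
$ sudo yum distro-sync -y
$ sudo systemctl enable firewalld
$ sudo reboot
```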
Then I installed Docker using these instructions: https://store.docker.com/editions/community/docker-ce-server-centos?tab=description

Then I created the following container:
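Presumably something like the nginx example from the issue description below, published on port 8000 (the exact flags are an assumption):

```console
$ docker run -d --name web -p 8000:80 nginx
```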
Using the public IP of my droplet, I was able to verify that I can access the HTTP page using my web browser (it displays the static welcome-to-nginx page, so Docker configured the firewall correctly, cool!). Now I create another container and pass it my droplet's public IP, targeting the published port 8000, and I get the no route to host error (see the sketch below).

My guess why you could not reproduce it: CentOS 7 droplets have firewalld disabled by default. You need to activate it.
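A sketch of that second-container test, assuming a throwaway curl image (curlimages/curl here) and substituting the droplet's public IP:

```console
$ docker run --rm curlimages/curl http://<droplet-public-ip>:8000/
# fails with "no route to host" while firewalld is active
```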
Hi @thaJeztah I realised that my previous post was not clear enough, so regarding your 3 points:
I hope that, given the instructions in my previous post, you can also reproduce it on your end. Thanks for your support btw!
Hi @thaJeztah Did you manage to reproduce it using my updated instructions?
Hello @thaJeztah I did some further investigation. I did a new test using an Ubuntu droplet. I set it up like I did for the CentOS one, but of course using apt instead of yum, etc. To make the test more relevant, I activated the ufw firewall.
Then I installed Docker and ran the same docker container (as described for CentOS). But instead of getting host unreachable right away, after a long period (ca. a minute) I get a timeout. So here again the firewall is blocking inter-container communication when using a public IP.

To solve that, on CentOS one can use the command in my original issue post, or refer to https://serverfault.com/questions/684602/how-to-open-port-for-a-specific-ip-address-with-firewall-cmd-on-centos if one wants to restrict which source IPs can connect to the opened port (e.g. giving the docker network IP range as source), e.g.:
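For example, a rich rule restricted to the docker bridge subnet (172.17.0.0/16 and port 8000 are assumptions matching the examples above):

```console
$ sudo firewall-cmd --zone=public \
    --add-rich-rule='rule family="ipv4" source address="172.17.0.0/16" port port="8000" protocol="tcp" accept'
```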
Or you can do the following (it is similar to the above):
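For instance, simply opening the published port to all sources:

```console
$ sudo firewall-cmd --zone=public --add-port=8000/tcp
```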
On Ubuntu do:
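A sketch with ufw, again assuming the 172.17.0.0/16 docker subnet and port 8000:

```console
$ sudo ufw allow from 172.17.0.0/16 to any port 8000 proto tcp
```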
So it really depends on how the firewall is configured in the first place. Perhaps Docker could make sure that containers can connect to each other when using the external IP address by creating these rules automatically.
Hello @thaJeztah Any news on this front? How can I help further on this topic?
Ping!
Any news on this?
Hi @whgibbo I haven't received any news, and I'm still using the proposed workaround.
The problem still exists in the current docker version. The basic idea is: containers should be able to access published ports just as other hosts on the Internet do. It's unreasonable to block containers from accessing public ports.
@jcberthon I'm trying to propose a PR to resolve this issue, since I think a rule covering the docker interface could fix it.
Hi @vizv Sorry for the long delay. I haven't had a lot of free time these last 2 weeks and spent more time with my family. I will create a droplet again with the configuration as I described it and send you the information.
I have the same problem; I tried multiple workarounds, but nothing has worked yet.
@vedmant Have you tried applying this patch moby/libnetwork#1963 and recompiling docker? Or you can manually fix the iptables rules.
@vizv I wanted to stick to the official build to be able to update it easily and regularly. All I need is to close all public ports on the machine except a few, like ssh, http, and https, while keeping Docker containers able to connect to a database that runs on the host machine. Is there some other possible way, other than compiling docker manually?
@vedmant as I mentioned, you can delete the related entries in iptables.
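On CentOS 7, the entry that typically produces the no route to host error is the catch-all REJECT rule in the INPUT chain; a sketch of removing it (not persistent, and it weakens the host firewall):

```console
$ sudo iptables -L INPUT --line-numbers        # locate the "REJECT ... icmp-host-prohibited" rule
$ sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
```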
Hey guys!

In your case @jcberthon, try this:
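Presumably something along the lines of enabling masquerading on the public zone, which is a commonly cited firewalld fix for this symptom:

```console
$ sudo firewall-cmd --permanent --zone=public --add-masquerade
$ sudo firewall-cmd --reload
```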
@eltonplima saved my day, I spent 6 hours trying to fix it! THANKS! But in my case I needed to use:
@eltonplima thank you for the hint. It is similar to my own workaround, see #32138 (comment); the example I gave was meant to limit access to the service, but of course you can open it fully.
Hi there! I had the same issue. I resolved it!

If that does not work, try this.

Hope this helps with your issue.
Any updates on this? I am experiencing this problem on a system that doesn't use firewalld.

I have a mail server and several other services running on a host. The mail server runs in a different docker bridge network than the other services, but publicly exposes its ports. The other services should be able to access the mail server using the publicly exposed ports, but no connection can be made.
Description
On CentOS 7 and RHEL 7 (and possibly any Linux OS using `firewalld`) we have the following problem: when at least 2 containers are running on the same host, one container cannot access the network services offered by the other by using the "external" IP address or hostname. The error message returned is `host unreachable` or `no route to host`.

Note that sometimes, even when setting a "hostname" for a docker container (via the `--hostname` option) and being able to ping that hostname from another container (the hostname is then resolved to the Docker-internal IP), it might still not work, because some applications (e.g. gitlab-runner) resolve the given hostname using the external DNS resolver rather than the one of the Docker network. Weird but true.

Someone already reported the problem (#24370) but did not provide enough information, and thus the issue was closed. I have all the necessary information, and I can provide more on demand.
Steps to reproduce the issue:

I have found a series of steps that are easy for anyone to follow and that reproduce the problem. It is assumed that in your home directory you have an `html-pub` folder containing a static `index.html` file (`mkdir ~/html-pub`, then download a simple static HTML file from the internet and put it in that folder). All commands are run on the host where Docker 17.03 is running. It is also assumed that the IP address of the host is `192.168.1.2`.
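A sketch of those steps, assuming published port 8000 (as used elsewhere in this issue) and a throwaway curl image:

```console
# serve the static file from the first container
$ docker run -d --name web -p 8000:80 -v ~/html-pub:/usr/share/nginx/html:ro nginx

# from a second container, fetch it via the host's address
$ docker run --rm curlimages/curl http://192.168.1.2:8000/
```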
.Describe the results you received:
On CentOS 7, with firewalld installed, I receive this:
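With curl as the client, the failure looks roughly like this (illustrative; the exact wording depends on the client):

```console
$ docker run --rm curlimages/curl http://192.168.1.2:8000/
curl: (7) Failed to connect to 192.168.1.2 port 8000: No route to host
```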
Describe the results you expected:
On Ubuntu, without firewalld (but still with a firewall), I get this:
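Illustratively, the second container receives the static page:

```console
$ docker run --rm curlimages/curl http://192.168.1.2:8000/
<!DOCTYPE html>
...the contents of ~/html-pub/index.html...
```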
Additional information you deem important (e.g. issue happens only occasionally):
On CentOS 7, doing the following solved the problem. But I would expect the `docker run` command to do those extra steps, as I used the `-p` flag.
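A sketch of those two commands, assuming port 8000 (per the update below, only the second one actually matters):

```console
$ sudo firewall-cmd --zone=trusted --add-interface=docker0
$ sudo firewall-cmd --zone=public --add-port=8000/tcp
```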
Note: The above commands are for testing. If one wants them to be permanent, one needs to add the `--permanent` flag to both commands and then execute `sudo firewall-cmd --reload`.
Update 20170330: actually only the second command is enough; adding docker0 to the trusted zone has no effect.
The above is a dummy example. A real-life test case where this fails for us is running the GitLab and GitLab Runner containers on the same host. We had to use a different hostname for the `docker run` command than the real hostname users use to access our own internal GitLab instance, in order for the gitlab-runner to register successfully. But then, when trying to use that runner, it cannot clone the repository: GitLab provides the "external" FQDN for the repository the runner should clone, and the runner fails before even starting the job because the host is unreachable. The nginx example is therefore relevant and a much easier way of demonstrating the issue.

Output of `docker version`:

Output of `docker info`:

Additional environment details (AWS, VirtualBox, physical, etc.):
The host runs CentOS 7.3 on bare-metal (so physical). But I have also reproduced it inside a VM (KVM) during the investigation.
I also tried the above (and got the expected result) on an Ubuntu 16.04 LTS machine running the 4.8 HWE kernel; this is also a bare-metal x86_64 machine, but with only 2 CPUs and 8 GiB RAM, and the storage driver is btrfs.