Docker - Run Multiple Docker Websites On The Same Host


You cannot run multiple containers that publish the same host port on the same host. This problem has bothered me ever since I started using Docker (way back when it was version 0.7). It is a big issue for me because most of the projects I work on are web applications, which all want to use ports 80/443 by default, and I can't expect my users to remember to manually append random port numbers to the URL.

Updated Solution

Go here to get a more up-to-date and better solution than the one outlined in this post.


We are going to give each container its own public IP address on the same subnet as the host.

Since IPv4 addresses are becoming scarce, this will not work on Amazon/Rackspace/Digital Ocean, where only one public IP is allowed per virtual machine. If you only need this for websites, use a reverse proxy instead.
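For the website-only case, the idea is to publish each container on a different private port and let a single proxy on port 80 route requests by hostname. A minimal sketch of such an nginx server block, assuming a container published on port 8080 and the hostname site1.example.com (both hypothetical values):

```nginx
# Route requests for site1.example.com to the container published on 8080.
server {
    listen 80;
    server_name site1.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

You would add one such `server` block per site, each pointing at a different container's published port.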


Create a Bridge

Bridges are like routers, except that they forward traffic based on the MAC address rather than the IP address. This is great because it means we will not have to create iptables forwarding rules for each Docker container's IP.

editor /etc/network/interfaces

Make it look something like below.
auto eth0
iface eth0 inet manual

# bridge for the docker containers network to connect to main
auto wan
iface wan inet static
        address [HOST IP HERE]
        netmask [NETMASK HERE, e.g. 255.255.255.0]
        gateway [GATEWAY IP HERE, e.g. 192.168.1.1]
        dns-nameservers [NAMESERVER IPs HERE, e.g. 8.8.8.8 8.8.4.4]
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
You may need to change eth0 to eth1 etc. if you have a different setup.
You can call your bridge something other than wan, just make sure to take that into account in the rest of the tutorial.
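A mistake in this file can cut off a remote host when networking is restarted, so it can help to sanity-check the new stanza first. A minimal sketch, assuming the bridge is named wan as above (check_bridge_stanza is a hypothetical helper name, not part of any tool):

```shell
# Sanity-check an /etc/network/interfaces file for the "wan" bridge stanza
# before applying it with:  sudo ifdown eth0 && sudo ifup wan
# (verify afterwards with:  brctl show wan)
check_bridge_stanza() {
    file="$1"
    grep -q '^auto wan$' "$file" &&
    grep -q 'bridge_ports eth0' "$file" &&
    grep -q 'bridge_stp off' "$file"
}
```

Usage: `check_bridge_stanza /etc/network/interfaces && echo "stanza looks OK"`.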

Create Docker Start Script

Normally when you deploy a container, it is something like below which can be easily typed into the terminal:

docker run -d -p 80:80 [my-image]
However, you will need to use the following configuration, so I suggest you create a script that you can call later.
docker run \
--net="none" \
--lxc-conf="lxc.network.type = veth" \
--lxc-conf="lxc.network.ipv4 = [docker container ip]/[cidr]" \
--lxc-conf="lxc.network.ipv4.gateway = [gateway ip]" \
--lxc-conf="lxc.network.link = wan" \
--lxc-conf="lxc.network.name = eth0" \
--lxc-conf="lxc.network.flags = up" \
-d [Docker Image ID]
You can change eth0 to something like eth123; it won't make any difference. It just means that the interface will appear as eth123 from inside the container.
My example:
docker run \
--net="none" \
--lxc-conf="lxc.network.type = veth" \
--lxc-conf="lxc.network.ipv4 = [docker container ip]/[cidr]" \
--lxc-conf="lxc.network.ipv4.gateway = [gateway ip]" \
--lxc-conf="lxc.network.link = wan" \
--lxc-conf="lxc.network.name = eth123" \
--lxc-conf="lxc.network.flags = up" \
-d `docker images -q | sed -n 1p`
The above will create a container that is publicly accessible at the IP address you assigned to it.

You no longer need to worry about specifying ports; however, you are going to have to keep track of your IPs, which is easily done by creating a single deployment script per container that you want to run.
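For instance, a per-container deployment script might look like the sketch below. The image name, IP, and gateway are hypothetical placeholders for your own values, and the DRY_RUN switch is an addition of mine so the exact docker command can be previewed before it touches a live host:

```shell
#!/bin/sh
# Per-container deployment sketch. IMAGE, IP_CIDR and GATEWAY are
# hypothetical values -- substitute your own for each container.
deploy_container() {
    IMAGE="my-web-image"
    IP_CIDR="192.168.0.50/24"   # container's own address on the host's subnet
    GATEWAY="192.168.0.1"       # same gateway the host uses

    # With DRY_RUN=1 set, the docker command is printed instead of executed.
    run=${DRY_RUN:+echo}

    $run docker run \
        --net="none" \
        --lxc-conf="lxc.network.type = veth" \
        --lxc-conf="lxc.network.ipv4 = $IP_CIDR" \
        --lxc-conf="lxc.network.ipv4.gateway = $GATEWAY" \
        --lxc-conf="lxc.network.link = wan" \
        --lxc-conf="lxc.network.name = eth0" \
        --lxc-conf="lxc.network.flags = up" \
        -d "$IMAGE"
}
```

Call `deploy_container` from one copy of the script per site, changing only the variables at the top; run it once with DRY_RUN=1 to review the command first.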

VirtualBox Debugging Note

If you are testing this using a host inside VirtualBox and you find that your containers do not have internet access, make sure the VM's bridged network adapter has Promiscuous Mode set to "Allow All".
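If you prefer the command line over the GUI, the same setting can be changed with VBoxManage. A sketch, assuming a VM named docker-host and that the bridged adapter is adapter 1 (both assumptions; adjust for your setup):

```shell
# Set Promiscuous Mode to "Allow All" on a VM's network adapter.
# $1 = VM name, $2 = adapter number (1-based); both depend on your setup.
set_promisc_allow_all() {
    vm_name="$1"
    adapter="$2"
    VBoxManage modifyvm "$vm_name" --nicpromisc"$adapter" allow-all
}

# Example (the VM must be powered off when you change this):
#   set_promisc_allow_all "docker-host" 1
```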

