AWS + DHCP Docker Containers

Update - 1st March 2018

I just want to warn others that this post is quite old and there are probably much better ways of handling this now. A whole field has grown up around Docker networking, and tools such as Docker Swarm, Rancher, and Kubernetes make this much easier. I would recommend studying those areas instead. One of the easiest ways to get Kubernetes up and running when you self-host may actually be through Rancher.

The default way of creating docker containers is to use a bridge with a host-only subnet provided by the docker0 or lxcbr0 bridges. However, this makes it incredibly difficult or impossible for containers on different hosts to communicate. This tutorial will show you how to deploy containers onto the same subnet as the host with DHCP or static IPs, so that you can deploy containers to any node, yet still have them communicate with each other.

We will be deploying onto AWS EC2 instances, which requires us to NAT the bridge in order for our containers to gain internet access. If you are not deploying to the AWS network, then you can skip all the steps that involve iptables.
    Create a network interface by going to EC2 -> Network Interfaces -> Create Network Interface.
    Assign it to the subnet you wish to deploy on, and choose a single private IP. You will also need to choose a security group.
    When choosing the private IP, make sure to pick one that has a few spare IPs "around" it. We will add these later so that our DHCP server has a single contiguous IP "block" to dish out.
    Select the new network interface and click Actions -> Manage Private IP Addresses. Then add more IPs sequentially around the IP you chose in the previous step.
    Create an elastic IP and associate it with the lowest private IP on the newly created network interface.
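    If you prefer to script these steps, the same can be done with the AWS CLI; below is a rough sketch using placeholder IDs and example IPs (substitute your own):
    # create the network interface on your chosen subnet
    aws ec2 create-network-interface \
        --subnet-id subnet-xxxxxxxx \
        --groups sg-xxxxxxxx \
        --private-ip-address 10.0.0.10

    # add the surrounding "block" of spare private IPs
    aws ec2 assign-private-ip-addresses \
        --network-interface-id eni-xxxxxxxx \
        --private-ip-addresses 10.0.0.11 10.0.0.12 10.0.0.13 10.0.0.14

    # allocate an elastic IP and associate it with the lowest private IP
    aws ec2 allocate-address --domain vpc
    aws ec2 associate-address \
        --allocation-id eipalloc-xxxxxxxx \
        --network-interface-id eni-xxxxxxxx \
        --private-ip-address 10.0.0.10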
    Create an EC2 instance (Ubuntu in this tutorial) and place it on the subnet you chose earlier; you will then be able to select the network interface you just created. Do not add the network interface in addition to the default one that is allocated. You should now see a message stating that you cannot be allocated a public IP. This is because a public IP from your elastic IPs has already been allocated to that network interface.
    Log into your new EC2 instance and install Docker and LXC, similar to the sketch below.
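    The package names here are an assumption based on Ubuntu 14.04; adjust for your release:
    sudo apt-get update
    sudo apt-get install -y docker.io lxc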
    Update /etc/default/docker so that it contains the line below, which switches Docker over to the LXC execution driver. The --dns value is the DNS server you want your containers to use ($DNS_SERVER_IP is a placeholder; substitute your own):
    DOCKER_OPTS="-e lxc --dns $DNS_SERVER_IP"
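    Then restart the docker service so that the new options take effect (assuming Ubuntu's stock service name):
    sudo service docker restart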
    Replace the contents of /etc/network/interfaces with:
    # The primary network interface
    auto eth0
    iface eth0 inet dhcp
    auto br0
    iface br0 inet dhcp
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0
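    The bridge_* options above come from the bridge-utils package, so make sure it is installed; brctl (also from bridge-utils) then lets you verify the bridge once it is up:
    sudo apt-get install -y bridge-utils
    # after running "sudo ifup br0" below, confirm that eth0 is enslaved to the bridge
    brctl show br0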
    Run the following commands to set up routing:
    # bring up the bridge we just created
    sudo ifup br0
    # set up routing (aws specific)
    sudo iptables -t nat -F
    sudo iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
    sudo iptables --append FORWARD --in-interface br0 -j ACCEPT
    # $VPC_CIDR is your VPC's internal subnet, e.g. a 10.x range
    sudo iptables -t nat -A POSTROUTING -d $VPC_CIDR -j ACCEPT
    sudo iptables -t nat -A POSTROUTING ! -d $VPC_CIDR -j SNAT --to-source $HOST_PRIVATE_IP
    Using MASQUERADE instead of SNAT will not work, so don't try! $VPC_CIDR above is the internal subnet of my AWS VPC; yours may well differ, so substitute your own value.
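    You can sanity-check that the rules are in place with:
    sudo iptables -t nat -L POSTROUTING -n -v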

    If one of your containers is acting as a reverse proxy, you will want to append DNAT rules as well, similar to the sketch below.
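    This is only a rough sketch, not the original rules: $PROXY_IP is a placeholder for the proxy container's private IP, and the ports are examples:
    # forward incoming web traffic to the reverse proxy container
    sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination $PROXY_IP:80
    sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination $PROXY_IP:443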

    Run the following commands to enable forwarding.
    # enable forwarding immediately
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    # persist the setting across reboots by uncommenting it in /etc/sysctl.conf
    sudo sed -i "s;#net.ipv4.ip_forward=1;net.ipv4.ip_forward=1;" /etc/sysctl.conf
    sudo sysctl -p
    If you want to deploy your containers with dynamic IPs, then install and configure dnsmasq to act as the DHCP server for the node:
    sudo apt-get install dnsmasq -y
    sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.bak
    sudo vim /etc/dnsmasq.conf
    Replace the contents of /etc/dnsmasq.conf with a DHCP configuration along the lines of the sketch below.
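    A minimal sketch, assuming br0 is your bridge; the range is a made-up example and must match the spare private IPs you added to the network interface earlier:
    # /etc/dnsmasq.conf
    interface=br0                        # only serve DHCP on the bridge
    bind-interfaces
    dhcp-range=10.0.0.11,10.0.0.14,12h   # example range; use your own spare IPs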

    No DHCP Configuration

    If you don't want to use DHCP, then you simply need to start your containers similar to the example below (but you will need to keep track of the IPs of every container):

    docker run \
    --net="none" \
    --lxc-conf=" = veth" \
    --lxc-conf=" = $IP_OF_CONTAINER/$CIDR" \
    --lxc-conf=" = $HOST_PRIVATE_IP" \
    --lxc-conf=" = wan" \
    --lxc-conf=" = eth123" \
    --lxc-conf=" = up" \
    -d $IMAGE_ID
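    Note that lxc.network.link names the bridge the container attaches to; if you followed this tutorial, that bridge is br0 rather than wan.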

    Restart dnsmasq for the changes to take effect
    sudo service dnsmasq restart
    If you run out of IPs because they are all currently leased to containers that didn't shut down gracefully, you can just empty the leases file, which by default on Ubuntu is at /var/lib/misc/dnsmasq.leases.
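    For example (paths and service name as on stock Ubuntu):
    sudo service dnsmasq stop
    sudo truncate -s 0 /var/lib/misc/dnsmasq.leases
    sudo service dnsmasq start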
    If you are using DHCP for your Ubuntu containers, you will need to add the following line to your Dockerfile [docker bug report]:
    RUN mv /sbin/dhclient /usr/sbin/dhclient
    Now start your container similar to the example below:
    docker run \
    -d \
    --privileged \
    --net="none" \
    --lxc-conf=" = veth" \
    --lxc-conf=" = br0" \
    --lxc-conf=" = up" \
    Your container will need to automatically run the command
    sudo dhclient eth0
    to grab an IP from the DHCP server. I have this in a startup script that is called from the CMD instruction in the container's Dockerfile, and I use
    cron -f
    as the last line in the startup script to "tie up" the container's foreground process; a sketch of both files follows.
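    For illustration, a minimal version of that startup script and the matching Dockerfile lines might look like this (the file names and paths are my assumptions, not necessarily what the original setup used):
    #!/bin/bash
    # start.sh - runs as the container's CMD
    # grab an IP from the host's dnsmasq DHCP server (we run as root in the container, so no sudo needed)
    dhclient eth0
    # ... start your actual services here ...
    # hold the foreground so the container keeps running
    cron -f

    # Dockerfile snippet
    RUN mv /sbin/dhclient /usr/sbin/dhclient
    COPY start.sh /root/start.sh
    RUN chmod +x /root/start.sh
    CMD ["/root/start.sh"]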
    Your container should now have one of the private IPs that you allocated to the network interface earlier. It should also have NAT'd internet access.


  1. At the exact moment when I'm trying to reboot after making changes to interfaces, I lose connectivity. Any ideas? Please help :)

  2. This comment has been removed by a blog administrator.

    1. I could not see how the comment was related to this post, so it was removed; it looked like it was just there to link to other content. However, I am more than happy for people to link to other related content if it is relevant to the post.