Docker Networking

Upon installation, Docker creates a default bridge network, whose interface on the host is called docker0:

root@ubuntu-local:~# ifconfig  docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:82:9b:91:ac  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

root@ubuntu-local:~# ip a s docker0
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:82:9b:91:ac brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
root@ubuntu-local:~#

To list all Docker networks, run docker network ls:

root@ubuntu-local:~# docker network ls
NETWORK ID     NAME                                    DRIVER    SCOPE
26af7973bea1   502-server-with-configuration_default   bridge    local
cbd850d236c0   bridge                                  bridge    local
67c9f0c3763f   host                                    host      local
81452ec3851a   none                                    null      local
4588d76cb760   panoramic_default                       bridge    local
a1b42801c233   server-with-dependency_default          bridge    local
6a54975dfe4c   wordpress-compose_default               bridge    local

docker0 has an IP address of 172.17.0.1. By default, containers will receive an IP from this subnet.
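If you only need the subnet that new containers will draw their addresses from, you can query it directly with a format string (a quick check, assuming the default bridge network has not been reconfigured):

docker network inspect -f '{{(index .IPAM.Config 0).Subnet}}' bridge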

Exercise 6.01 - Hands-On with Docker Networking

A container always exists in a Docker network. This network determines which resources the container can access.

The behavior of the network is determined by the network driver used for a particular container. The default driver is bridge. For example, containers in the same bridge subnet can talk to each other, and outgoing connections can reach the internet.

In this exercise, you will deploy two NGINX web servers. Each container will expose its ports in a different scenario, and then you will access the exposed ports and learn how Docker networking works at a basic level.

  1. Create an NGINX web server with the name webserver1 by starting it in the background:
root@ubuntu-local:~# docker run -d --name webserver1 nginx
567bd594d44521af051d4386ff7693d5f0d8e6992310ba1a7665aafeab4da24d
  1. Execute docker ps to check whether the container is up and running:
root@ubuntu-local:~# docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS   NAMES
567bd594d445   nginx       "/docker-entrypoint.…"   46 seconds ago   Up 40 seconds   80/tcp  webserver1
  1. Execute docker inspect to check what networking configuration this container has by default (Networks sub-section under NetworkSettings):
"Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "cbd850d236c0ad94de0cd3bf3af75ffcf7687349fb5996f7bfe52d4aaf733677",
                    "EndpointID": "1d46c85a9404e5426f6fee57898f0c293affa93ef5ec02f957bc4ea4058619ed",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }

The Gateway has the IP address of the docker0 bridge network. The container IP is 172.17.0.2. The docker0 interface will be the bridge between the underlying host and the container to send and receive traffic.
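Instead of scanning the full JSON, you can pull just the container’s IP address with a format string (a convenience sketch; the template path assumes the container is attached to the default bridge network):

docker inspect -f '{{.NetworkSettings.Networks.bridge.IPAddress}}' webserver1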

The container is also exposing port 80 by default:

"Config": {
            "Hostname": "567bd594d445",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "80/tcp": {}
            },

Since docker0 is the bridge between the underlying host and the container, webserver1 can be accessed by IP address:

root@ubuntu-local:~# curl -s http://172.17.0.2 | grep -i title
<title>Welcome to nginx!</title>
  1. As you can see, by default the underlying host can talk to containers via their IP addresses. To make a container available to other servers or users, you have to map a port on your host to an exposed port on the container. Let’s create another web server named webserver2 and map container port 80 to port 8080 on the host:
root@ubuntu-local:~# docker run -d -p 8080:80 --name webserver2  nginx
09c220760783cc9c509e202471f658431caaf950bff7e38ad2a71d8195c7fa4c
  1. Both webserver1 and webserver2 are up and running:
root@ubuntu-local:~# docker ps | grep webserver
09c220760783   nginx       "/docker-entrypoint.…"   4 minutes ago   Up 4 minutes   0.0.0.0:8080->80/tcp webserver2

567bd594d445   nginx       "/docker-entrypoint.…"   2 hours ago     Up 2 hours     80/tcp webserver1

Note: The host machine port (8080) is on the left, and the container port (80) is on the right.
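You can also list a container’s active port mappings with docker port, and the -p flag accepts a host address as well. For example (webserver3 is a hypothetical extra instance, published only on the loopback interface, and is not part of this exercise):

docker port webserver2
docker run -d -p 127.0.0.1:8081:80 --name webserver3 nginx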

  1. webserver2 can be accessed via localhost:
root@ubuntu-local:~# curl -s http://localhost:8080 | grep -i title
<title>Welcome to nginx!</title>
  1. Now we have two instances of NGINX in the same Docker network with slightly different network configurations. webserver1 is running without any published ports. webserver2 publishes container port 80 on host port 8080:
"NetworkSettings": {
            "Bridge": "",
            "SandboxID": "828563b55102edd06ba77cc3b61bb66bfe31c713150c9504d7f8c64173385b53",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "80/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "8080"
                    },
                    {
                        "HostIp": "::",
                        "HostPort": "8080"
                    }
                ]
            },
         
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "cbd850d236c0ad94de0cd3bf3af75ffcf7687349fb5996f7bfe52d4aaf733677",
                    "EndpointID": "b37a4df72f8f5e235dcccddda4816281f123bb6f44c629de41409fa1a82bd065",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:03",
                    "DriverOpts": null
                }

Its IP address is 172.17.0.3. Both containers live in the same bridge network.

  1. You can test the communication between the containers. Get a shell on webserver1, install ping, and ping webserver2:
root@ubuntu-local:~# docker exec -it webserver1 /bin/bash
root@567bd594d445:/#
root@567bd594d445:/# apt-get update -y > /dev/null && apt-get install inetutils-ping -y > /dev/null
debconf: delaying package configuration, since apt-utils is not installed
root@567bd594d445:/#
root@567bd594d445:/# ping -c 3 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: icmp_seq=0 ttl=64 time=2.009 ms
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.217 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.169 ms
--- 172.17.0.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.169/0.798/2.009/0.856 ms
  1. Install curl and browse to port 80 on webserver2:
root@567bd594d445:/# apt-get install curl -y > /dev/null
debconf: delaying package configuration, since apt-utils is not installed
root@567bd594d445:/#
root@567bd594d445:/#
root@567bd594d445:/# curl -s 172.17.0.3 | grep -i title
<title>Welcome to nginx!</title>

We successfully deployed two NGINX web servers. You configured one web server without publishing any ports outside the default Docker network, while you configured the second instance to run on the same default network but publish container port 80 on host port 8080. Both containers were reachable via a browser or curl, and they could communicate with each other over the bridge network.

Docker DNS

With DNS, we no longer have to look up a container by its IP address. With human-readable names, communication between containers is more reliable.

The quick and dirty method is to establish a link between containers with the --link option when deploying a container. This method creates an entry for the target container in the new container’s /etc/hosts file.

For pure DNS, you create a user-defined network; the --network-alias option of docker run can then provide an additional DNS name for a container.
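As a minimal sketch of the difference (the container and network names here are placeholders, not part of the exercise that follows):

# legacy link: only the new container gets a hosts entry for the linked target
docker run -d  --name target alpine sleep 3600
docker run -it --link target alpine ping -c 1 target

# user-defined network: any container on it resolves the others by name
docker network create appnet
docker run -d  --network appnet --name target2 alpine sleep 3600
docker run -it --network appnet alpine ping -c 1 target2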

Exercise 6.02 - Working with Docker DNS

You will first enable simple name resolution using the legacy link method. You will then contrast this approach with the newer and more reliable native Docker DNS service.

  1. You’ll create two Alpine Linux containers. Both containers will use the default bridge network, and the second container will be linked to the first one. Create the first container with the name containerlink1:
root@ubuntu-local:~# docker run -itd --name containerlink1 alpine
172036390c4b9c1b37eead1f74985fa893a4b66671d76a9cc79a59c2a799c753
  1. Create the second container, called containerlink2, with a link to containerlink1:
root@ubuntu-local:~# docker run -itd --name containerlink2 --link containerlink1 alpine
e419fef30745d7ca81ececa18262eaea76a6661aec9fe179b05771a4037721f5
  1. Access a shell on containerlink2 and ping containerlink1:
root@ubuntu-local:~# docker exec -it containerlink2 /bin/sh
/ #
/ #
/ # ping -c 3 containerlink1
PING containerlink1 (172.17.0.4): 56 data bytes
64 bytes from 172.17.0.4: seq=0 ttl=64 time=7.972 ms
64 bytes from 172.17.0.4: seq=1 ttl=64 time=0.203 ms
64 bytes from 172.17.0.4: seq=2 ttl=64 time=0.194 ms

--- containerlink1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.194/2.789/7.972 ms
  1. containerlink1 is reachable by its domain name because there’s an entry in the /etc/hosts of containerlink2:
/ # cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.17.0.4	containerlink1 172036390c4b
172.17.0.5	e419fef30745
  1. Do the same thing on containerlink1:
root@ubuntu-local:~# docker exec -it containerlink1 /bin/sh
/ #
/ # ping -c 3 containerlink2
ping: bad address 'containerlink2'

containerlink1 has no idea that containerlink2 exists because no hosts entry has been created; links are one-directional.

  1. Create a network called dnsnet and spawn two Alpine containers within that network. Use docker network create to create a new Docker network with the 192.168.54.0/24 subnet and set the gateway to 192.168.54.1:
root@ubuntu-local:~# docker network create dnsnet --subnet 192.168.54.0/24 --gateway 192.168.54.1

1de21832511ba522d9601b0657577fe6beb7a19535e3006ce400440634eeaca3
  1. List the available networks:
root@ubuntu-local:~# docker network ls
NETWORK ID     NAME                                    DRIVER    SCOPE
26af7973bea1   502-server-with-configuration_default   bridge    local
85e54c6dcd8d   bridge                                  bridge    local
1de21832511b   dnsnet                                  bridge    local
67c9f0c3763f   host                                    host      local
81452ec3851a   none                                    null      local
4588d76cb760   panoramic_default                       bridge    local
a1b42801c233   server-with-dependency_default          bridge    local
6a54975dfe4c   wordpress-compose_default               bridge    local
  1. Inspect dnsnet and pay attention to the driver, subnet, and gateway:
root@ubuntu-local:~# docker network inspect dnsnet
[
    {
        "Name": "dnsnet",
        "Id": "1de21832511ba522d9601b0657577fe6beb7a19535e3006ce400440634eeaca3",
        "Created": "2021-06-19T06:20:03.431785116Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.54.0/24",
                    "Gateway": "192.168.54.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
root@ubuntu-local:~#

As you can see, the driver is set to bridge by default. Since it’s a bridge, Docker will create an interface on the host for the network:

br-1de21832511b: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.54.1  netmask 255.255.255.0  broadcast 192.168.54.255
        ether 02:42:ff:07:60:86  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
  1. Start an Alpine container within this network with both the name and the network alias set to alpinedns1:
root@ubuntu-local:~# docker run -itd --network dnsnet --network-alias alpinedns1 --name alpinedns1 alpine
08fd3e755699ab68ab899f4886c339d533ada94bbcae80dc88c08e015cfbfd25
  1. Start a second container with both the name and the network alias set to alpinedns2:
root@ubuntu-local:~# docker run -itd --network dnsnet --network-alias alpinedns2 --name alpinedns2 alpine
6d124134be0974d0da5555d6ba8a428bdd420359e59f8c15d841541885599cc1
  1. Verify that the containers are up and running:
root@ubuntu-local:~# docker ps
CONTAINER ID   IMAGE     COMMAND     CREATED          STATUS              PORTS     NAMES
6d124134be09   alpine    "/bin/sh"   51 seconds ago   Up 48 seconds                 alpinedns2
08fd3e755699   alpine    "/bin/sh"   2 minutes ago    Up About a minute             alpinedns1
  1. Verify the IP and gateway of both containers:
root@ubuntu-local:~# docker inspect alpinedns1 | egrep -i "ipaddress|gateway" | tail -n 3
                    "Gateway": "192.168.54.1",
                    "IPAddress": "192.168.54.2",
                    "IPv6Gateway": "",
root@ubuntu-local:~#
root@ubuntu-local:~# docker inspect alpinedns2 | egrep -i "ipaddress|gateway" | tail -n 3

                    "Gateway": "192.168.54.1",
                    "IPAddress": "192.168.54.3",
                    "IPv6Gateway": "",
  1. Get a shell inside alpinedns1 and ping alpinedns2:
root@ubuntu-local:~# docker exec -it alpinedns1 /bin/sh
/ #
/ # ping -c 4 alpinedns2
PING alpinedns2 (192.168.54.3): 56 data bytes
64 bytes from 192.168.54.3: seq=0 ttl=64 time=0.802 ms
64 bytes from 192.168.54.3: seq=1 ttl=64 time=0.236 ms
64 bytes from 192.168.54.3: seq=2 ttl=64 time=0.251 ms
64 bytes from 192.168.54.3: seq=3 ttl=64 time=0.204 ms

--- alpinedns2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.204/0.373/0.802 ms
  1. View the content of /etc/hosts to reveal that Docker is using true DNS as opposed to /etc/hosts entries:
/ # cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
192.168.54.2	08fd3e755699
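The name resolution itself is handled by Docker’s embedded DNS server, which containers on user-defined networks use as their resolver (typically 127.0.0.11). You can confirm this from inside the container:

cat /etc/resolv.conf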
  1. Run docker system prune -fa to clean up stopped containers, unused networks, and unused images.
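Since prune only removes stopped containers, one way to clean up is to stop the containers from this exercise first (a sketch using the names created above):

docker stop containerlink1 containerlink2 alpinedns1 alpinedns2
docker system prune -af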

Native Network Drivers

Docker provides different types of network drivers to enable flexibility in how containers are deployed. The various network drivers are listed below, followed by a short usage sketch:

  • bridge: The network that containers use by default. It corresponds to the docker0 interface. Containers receive IP addresses from the docker0 subnet and also get access to the internet.

  • host: Containers within the host network have direct access to the underlying host’s network stack. Such a container doesn’t get its own IP address, and it sees everything that’s on the host.

  • none: The none network does not provide network connectivity. It’s completely isolated.

  • macvlan: Containers inside a macvlan network receive their own MAC address and appear as physical devices on your host’s network. To use macvlan, a new network must be created, and a physical interface such as eth0 must be declared as the parent of this network.
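Each of the built-in drivers is selected by pointing --network at the corresponding network when starting a container; as a preview of the commands used in the exercise below:

docker run -itd --network bridge alpine   # the default
docker run -itd --network host   alpine
docker run -itd --network none   alpine
# macvlan requires creating a network first with docker network create -d macvlan (shown later in the exercise)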

Exercise 6.03 - Exploring Docker Networks

In this exercise, we’ll start with the bridge network, and then look into none, host, and macvlan networks.

  1. First, get an idea on how networking is set up in your Docker environment:
root@ubuntu-local:~# ip a s docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ba:ef:6e:ac brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:baff:feef:6eac/64 scope link
       valid_lft forever preferred_lft forever
  1. Run docker network ls to list the networks available in your environment:
root@ubuntu-local:~# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
8b72e4204643   bridge    bridge    local
67c9f0c3763f   host      host      local
81452ec3851a   none      null      local
  1. Let’s view the details of each network to understand how containers are expected to behave. Starting with bridge:
root@ubuntu-local:~# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "8b72e4204643089b14013192c20c740abe190cd1489f5f4423c071802e4dd831",
        "Created": "2021-06-19T14:16:48.67505708Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

The scope is local. The subnet is 172.17.0.0/16, and the gateway is 172.17.0.1. To reach an external network from a container inside the bridge, a packet first goes through the docker0 gateway and then out through your host’s gateway.
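Outbound traffic from the bridge is NATed by the host. On a typical installation you can see the masquerade rule Docker adds for the bridge subnet (a check that assumes iptables is in use on the host):

iptables -t nat -L POSTROUTING -n | grep 172.17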

  1. The host network:
root@ubuntu-local:~# docker network inspect host
[
    {
        "Name": "host",
        "Id": "67c9f0c3763fce327a266f5ce1155f9c7b4f7df46797491f547c1d59891e78fe",
        "Created": "2021-03-15T08:47:59.721895956Z",
        "Scope": "local",
        "Driver": "host",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

The configuration is pretty much empty. The driver is set to host. Containers on the host network share the network stack of the underlying host.

  1. Finally the none network:
root@ubuntu-local:~# docker network inspect none
[
    {
        "Name": "none",
        "Id": "81452ec3851a61b3d164f5a0124e6180323116195ad1fd869f04265e02c308a2",
        "Created": "2021-03-15T08:47:59.708592435Z",
        "Scope": "local",
        "Driver": "null",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

Almost identical to the host network, except the driver is set to null. Networking is not available on this network.

  1. Create a container in the none network to observe its operation. Use docker run to start an Alpine container with the name nonenet in the none network, selected with --network:
root@ubuntu-local:~# docker run -itd --network none --name nonenet alpine
1309945b31f8db07f156880dcee34b0804ef89a992f9b62326d816349131dad0
root@ubuntu-local:~#
root@ubuntu-local:~#
root@ubuntu-local:~# docker ps
CONTAINER ID   IMAGE     COMMAND     CREATED         STATUS         PORTS     NAMES
1309945b31f8   alpine    "/bin/sh"   6 seconds ago   Up 2 seconds             nonenet
root@ubuntu-local:~#
  1. Inspect the container to understand how it is configured:
---REDACTED---
"Networks": {
                "none": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "81452ec3851a61b3d164f5a0124e6180323116195ad1fd869f04265e02c308a2",
                    "EndpointID": "d97a21b293040218312d31a8effec4a71ccbd0fa0e79a3dec7ac2ffd80cf11be",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "",
                    "DriverOpts": null
                }
  1. Get a shell and ping the Google DNS server:
root@ubuntu-local:~# docker exec -it nonenet /bin/sh
/ #
/ # ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
/ #
/ #
/ # ping -c 5 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
ping: sendto: Network unreachable
/ #

It has no network connectivity.

  1. Before you run a container in the host network, find the primary network interface on your host machine:
root@ubuntu-local:~# ip a s ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:85:92:ad brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.104/24 brd 192.168.100.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe85:92ad/64 scope link
       valid_lft forever preferred_lft forever

The primary interface is ens33 with an IP address of 192.168.100.104.

  1. Create an Alpine container in the host network with the name of hostnet1:
root@ubuntu-local:~# docker run -itd --network host --name hostnet1 alpine
daabb2950c1481948ee5906987c997482acab2707ea330377ed8d46c73230b6f
  1. Inspect the container’s network:
"Networks": {
                "host": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "67c9f0c3763fce327a266f5ce1155f9c7b4f7df46797491f547c1d59891e78fe",
                    "EndpointID": "ccdb6929125250a83f2ba24d1af8426634afdd33b62f3e3aff7bcc9d5caf21f2",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "",
                    "DriverOpts": null
                }

No IP address or gateway has been assigned to the container. It shares everything that’s on the host machine directly.

  1. Get a shell and run ifconfig:
root@ubuntu-local:~# docker exec -it hostnet1 /bin/sh
/ #
/ # ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:7A:16:FB:39
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

ens33     Link encap:Ethernet  HWaddr 00:0C:29:85:92:AD
          inet addr:192.168.100.104  Bcast:192.168.100.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe85:92ad/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1067 errors:0 dropped:0 overruns:0 frame:0
          TX packets:702 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:944258 (922.1 KiB)  TX bytes:77654 (75.8 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:104 errors:0 dropped:0 overruns:0 frame:0
          TX packets:104 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8196 (8.0 KiB)  TX bytes:8196 (8.0 KiB)

The list of network interfaces is identical to the host’s. Anything available on the host’s network is available in the container, and vice versa.

  1. Exit and create an NGINX container on the host network with the name hostnet2:
root@ubuntu-local:~# docker run -itd --network host --name hostnet2 nginx
3d56e34b2bd183108cbd08a6fc8080659c741cd6c1b971d1698278cb776bfaf0
  1. Now navigate to http://localhost or http://localhost:80 from your host machine:
root@ubuntu-local:~# curl -s http://localhost | grep -i title
<title>Welcome to nginx!</title>

It’s available by default on your host without forwarding or publishing any port.

  1. Now create another NGINX container on the host network with the name of hostnet3:
root@ubuntu-local:~# docker run -itd --network host --name hostnet3 nginx
d4ef7930136b656f46cd980020f0b2fb48d0acb3e759cc9b6a825eb17cfeec9b

If you look at the output of docker ps -a, you’ll see that the container exited:

root@ubuntu-local:~# docker ps -a
CONTAINER ID   IMAGE     COMMAND                  CREATED             STATUS          PORTS     NAMES
d4ef7930136b   nginx     "/docker-entrypoint.…"   20 seconds ago      Exited (1) 15 seconds ago hostnet3
e380320956e1   nginx     "/docker-entrypoint.…"   About an hour ago   Up About an hour          hostnet2
daabb2950c14   alpine    "/bin/sh"                2 hours ago         Up 2 hours                hostnet1

To understand why this is the case, run docker logs:

root@ubuntu-local:~# docker logs hostnet3
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/06/20 12:57:58 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2021/06/20 12:57:58 [emerg] 1#1: bind() to [::]:80 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
2021/06/20 12:57:58 [notice] 1#1: try again to bind() after 500ms
2021/06/20 12:57:58 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)

The second instance of NGINX was unable to bind to port 80 because the port was already in use by hostnet2.
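If you need to confirm which process is already holding the port on the host, a quick check is (assuming ss is available on the host):

ss -ltnp | grep ':80 '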

  1. The next type of network is macvlan. In a macvlan network, Docker allocates a MAC address to each container and makes it appear as a physical device on the network. Macvlan can run in bridge mode, which uses a host interface as the parent, or in 802.1Q trunk mode.

  2. The primary interface on my host is ens33. I will specify this interface as the parent and use the host’s subnet for the new macvlan network:

root@ubuntu-local:~# docker network create -d macvlan --subnet=192.168.100.0/24 --gateway=192.168.100.1 -o parent=ens33 macvlan-net1

4631d06bb0ce2d0f20d1d2eccc52ff4cf03bf68f4b29caf70332cf6231315a50
  • -d: to specify a driver.
  • -o: to set driver options.
  1. Run docker network ls to confirm that the network has been created:
NETWORK ID     NAME           DRIVER    SCOPE
8e7da7752336   bridge         bridge    local
67c9f0c3763f   host           host      local
4631d06bb0ce   macvlan-net1   macvlan   local
81452ec3851a   none           null      local
  1. Inspect the defined network to confirm the configuration:
root@ubuntu-local:~# docker network inspect macvlan-net1
[
    {
        "Name": "macvlan-net1",
        "Id": "4631d06bb0ce2d0f20d1d2eccc52ff4cf03bf68f4b29caf70332cf6231315a50",
        "Created": "2021-06-20T13:29:19.706941864Z",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.100.0/24",
                    "Gateway": "192.168.100.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "parent": "ens33"
        },
        "Labels": {}
    }
]

The subnet, gateway, and parent are set as expected.

  1. Create an Alpine container in the macvlan-net1 network:
root@ubuntu-local:~# docker run -itd --network macvlan-net1 --name macvlan1 alpine
9ea6c0de2f64c85f246265e2e77c7ce17e9f9838a3ff677f6da45589dcbcfad8
root@ubuntu-local:~#
root@ubuntu-local:~# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS          PORTS     NAMES
9ea6c0de2f64   alpine    "/bin/sh"                21 seconds ago   Up 12 seconds             macvlan1
  1. Inspect its network:
"Networks": {
                "macvlan-net1": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": [
                        "9ea6c0de2f64"
                    ],
                    "NetworkID": "4631d06bb0ce2d0f20d1d2eccc52ff4cf03bf68f4b29caf70332cf6231315a50",
                    "EndpointID": "7fb5623a780ce01b1d674dca473a9dc2414efbe9aa6ff7c51ff55d67a84ae5e6",
                    "Gateway": "192.168.100.1",
                    "IPAddress": "192.168.100.2",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:c0:a8:64:02",
                    "DriverOpts": null
                }

It’s part of the same network as my host with an IP address of 192.168.100.2.

  1. Deploy a second Alpine container on the network:
root@ubuntu-local:~# docker run -itd --network macvlan-net1 --name macvlan2 alpine
0d1429acfec1e55a83eaede090797714e5282ccb17ce9976c3727315ba4b27cc
root@ubuntu-local:~#
root@ubuntu-local:~# docker inspect macvlan2 | grep -i ipaddress | tail -n 1
                    "IPAddress": "192.168.100.3",
root@ubuntu-local:~#

This container has an IP address of 192.168.100.3.

  1. Get a shell inside macvlan1 and run ifconfig:
root@ubuntu-local:~# docker exec -it macvlan1 /bin/sh
/ #
/ #
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:64:02
          inet addr:192.168.100.2  Bcast:192.168.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:566 (566.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ #

It has the same Layer 2 MAC address and Layer 3 IP address as you saw in docker inspect.

  1. Install arping:
/ # apk add arping
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/3) Installing libnet (1.2-r0)
(2/3) Installing libpcap (1.10.0-r0)
(3/3) Installing arping (2.21-r1)
Executing busybox-1.33.1-r2.trigger
OK: 6 MiB in 17 packages
/ #

It’s a tool to look up MAC addresses and check Layer 2 connectivity.

  1. Check the MAC addresses and connectivity of macvlan2:
/ # arping 192.168.100.3
ARPING 192.168.100.3
42 bytes from 02:42:c0:a8:64:03 (192.168.100.3): index=0 time=69.795 usec
42 bytes from 02:42:c0:a8:64:03 (192.168.100.3): index=1 time=21.414 usec
42 bytes from 02:42:c0:a8:64:03 (192.168.100.3): index=2 time=23.019 usec
42 bytes from 02:42:c0:a8:64:03 (192.168.100.3): index=3 time=21.759 usec
42 bytes from 02:42:c0:a8:64:03 (192.168.100.3): index=4 time=20.553 usec
42 bytes from 02:42:c0:a8:64:03 (192.168.100.3): index=5 time=20.121 usec
42 bytes from 02:42:c0:a8:64:03 (192.168.100.3): index=6 time=21.729 usec
42 bytes from 02:42:c0:a8:64:03 (192.168.100.3): index=7 time=21.325 usec
42 bytes from 02:42:c0:a8:64:03 (192.168.100.3): index=8 time=63.737 usec
42 bytes from 02:42:c0:a8:64:03 (192.168.100.3): index=9 time=21.786 usec
^C
--- 192.168.100.3 statistics ---
10 packets transmitted, 10 packets received,   0% unanswered (0 extra)
rtt min/avg/max/std-dev = 0.020/0.031/0.070/0.018 ms
/ #

Everything looks good. You can confirm this by looking at the RX bytes (bytes received) and TX bytes (bytes transmitted) counters:

/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:64:02
          inet addr:192.168.100.2  Bcast:192.168.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1558 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1067 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2444079 (2.3 MiB)  TX bytes:74809 (73.0 KiB)
  1. Exit, stop all running containers, and clean up the system.
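One way to do that, using the names created in this exercise, is:

docker rm -f nonenet hostnet1 hostnet2 hostnet3 macvlan1 macvlan2
docker network rm macvlan-net1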

Overlay Networking

Overlay networks, or private networks, are created on top of another network. A Virtual Private Network (VPN) is a type of overlay network, where you connect to a VPN node to get access to another network.

In Docker, overlay networking is used to create mesh networks in a Docker swarm cluster. Nodes on the network have direct connectivity between them, so even if one node goes down, the network won’t break.

Exercise 6.04 - Defining Overlay Networks

In this exercise, you’ll use two machines to create a basic Docker swarm cluster. Both machines should be running the same version of Docker. You will define overlay networks that span the hosts in the cluster, and then ensure that containers deployed on separate hosts can talk to one another via the overlay network.

  1. Find out the Docker version on both machines:
# MACHINE 1
root@ubuntu-local:~# docker --version
Docker version 20.10.7, build f0df350


# MACHINE 2
[root@rockylinux ~]# docker --version
Docker version 20.10.7, build f0df350
  1. On Machine 1, run docker swarm init to initialize a Docker swarm cluster:
root@ubuntu-local:~# docker swarm init
Swarm initialized: current node (rm1msbqz129gj0zjm6z6tf1ll) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4tciaodlirtgz6nkp5vkouc4om0w8rk4bt7ql5xs2mbwambuny-dgzzfhj8icbhf7cqgfnqm18ys 192.168.100.104:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The Swarm is initialized. Machine 1 is now a manager.

  1. Run the command shown above on Machine 2 to join it as a worker:
root@rockylinux ~]# docker swarm join --token SWMTKN-1-4tciaodlirtgz6nkp5vkouc4om0w8rk4bt7ql5xs2mbwambuny-dgzzfhj8icbhf7cqgfnqm18ys 192.168.100.104:2377
This node joined a swarm as a worker.

If you get this error:

Error response from daemon: Timeout was reached before node joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.

Wait a little bit and run docker info. If Swarm is still inactive, allow port 2377 through the firewall on Machine 1 and re-run the docker swarm join command.
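On an Ubuntu manager using ufw, for example, the port can be opened like this (adapt the command to whatever firewall your distribution uses):

ufw allow 2377/tcp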

  1. Run docker info on both machines, and look at the swarm section:
# MACHINE 1

 Swarm: active
  NodeID: rm1msbqz129gj0zjm6z6tf1ll
  Is Manager: true
  ClusterID: ywz8lfy9xtvib1prllcdnr3vk
  Managers: 1
  Nodes: 2
  Default Address Pool: 10.0.0.0/8
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5

# MACHINE 2

 Swarm: active
  NodeID: t8qpc7lcbutowxai28zbsirar
  Is Manager: false
  Node Address: 192.168.100.115
  Manager Addresses:
   192.168.100.104:2377
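On the manager, you can also list the cluster members and their roles with docker node ls (the node IDs and hostnames will match your own machines):

docker node ls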
  1. On Machine 1, create an overlay network with a subnet and gateway that are not in use by any existing networks on your Docker hosts, to avoid subnet collisions. Use 172.45.0.0/16 as the subnet and 172.45.0.1 as the gateway:
root@ubuntu-local:~# docker network create overlaynet1 -d overlay --subnet 172.45.0.0/16 --gateway 172.45.0.1
2c5ovcep6us55dwju40ed8rk3
root@ubuntu-local:~#
root@ubuntu-local:~# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
c9d9c4fd90a9   bridge            bridge    local
3a2bb3dbc501   docker_gwbridge   bridge    local
67c9f0c3763f   host              host      local
qhc81dzr055h   ingress           overlay   swarm
81452ec3851a   none              null      local
2c5ovcep6us5   overlaynet1       overlay   swarm
root@ubuntu-local:~#

As you can see, the scope of the network is automatically set to swarm.

  1. Now, instead of running a standalone container, you’ll deploy the container as a service that can be scheduled on any node in the cluster. To create a service, you’ll use docker service create. To keep it simple, create an Alpine container service with a tty. Name the service alpine-overlay1:
root@ubuntu-local:~# docker service create -t --replicas 1 --network overlaynet1 --name alpine-overlay1 alpine
wnjw2i44k7svi7frn6wqt8op7
overall progress: 1 out of 1 tasks
1/1: running
verify: Service converged

Replicas are used to scale container instances across the nodes in a cluster for high availability. Here we specified only 1 replica for the service.
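If you later need more than one task for a service, you can scale it in place; for example, using the service created above:

docker service scale alpine-overlay1=3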

  1. Repeat the same command, but specify alpine-overlay2 as the service name:
root@ubuntu-local:~# docker service create -t --replicas 1 --network overlaynet1 --name alpine-overlay2 alpine
rvz4xhym9bv28zz8ftgqw223t
overall progress: 1 out of 1 tasks
1/1: running
verify: Service converged
  1. Run docker ps and inspect the available container:
root@ubuntu-local:~# docker ps
CONTAINER ID   IMAGE           COMMAND     CREATED              STATUS              PORTS     NAMES
2d7c497590a1   alpine:latest   "/bin/sh"   About a minute ago   Up About a minute             alpine-overlay1.1.ste1u114ndgn05mcdfpgp78hm
root@ubuntu-local:~#
root@ubuntu-local:~# docker inspect alpine-overlay1.1.dzfji7ml0lmpzdv5z2mebcal8
---REDACTED---
"Networks": {
                "overlaynet1": {
                    "IPAMConfig": {
                        "IPv4Address": "172.45.0.3"
                    },
                    "Links": null,
                    "Aliases": [
                        "2d7c497590a1"
                    ],
                    "NetworkID": "2c5ovcep6us55dwju40ed8rk3",
                    "EndpointID": "3c9f86632c148662ccbc07be8775a33585e22420b35308ffd6229cb5999da69c",
                    "Gateway": "",
                    "IPAddress": "172.45.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:2d:00:03",
                    "DriverOpts": null
                }

The container is within the overlaynet1 network and has an IP address of 172.45.0.3. Only one container is displayed by docker ps because the Swarm scheduler distributes tasks across the nodes in the cluster, and docker ps only shows containers running on the local node. In this case, alpine-overlay1 landed on Machine 1.
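To see which node each task actually landed on, query the swarm from the manager instead of the local daemon:

docker service ps alpine-overlay1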

  1. Run docker network ls on both machines:
# MACHINE 1
root@ubuntu-local:~# docker network ls | grep overlaynet
2c5ovcep6us5   overlaynet1       overlay   swarm

# MACHINE 2
alpinemachine:~# docker network ls | grep overlaynet
2c5ovcep6us5   overlaynet1       overlay   swarm

overlaynet1 is defined on both machines. Networks created using the overlay driver are made available to all nodes in the cluster, so containers attached to the network can run on any node and still reach each other.

  1. Run docker ps on Machine 2, and inspect the available container:
[root@rockylinux ~]# docker ps
CONTAINER ID   IMAGE           COMMAND     CREATED         STATUS         PORTS     NAMES
10848a01df25   alpine:latest   "/bin/sh"   2 minutes ago   Up 2 minutes             alpine-overlay2.1.pl0vr53f6cfr86wn94yrcwa7i
[root@rockylinux ~]#
[root@rockylinux ~]# docker inspect alpine-overlay2.1.pl0vr53f6cfr86wn94yrcwa7i
---REDACTED---
"Networks": {
                "overlaynet1": {
                    "IPAMConfig": {
                        "IPv4Address": "172.45.0.6"
                    },
                    "Links": null,
                    "Aliases": [
                        "10848a01df25"
                    ],
                    "NetworkID": "2c5ovcep6us55dwju40ed8rk3",
                    "EndpointID": "c3b739d3cddf36f6c5413936f07be6fdab8b26863115b6976d0e12569ecf1d6a",
                    "Gateway": "",
                    "IPAddress": "172.45.0.6",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:2d:00:06",
                    "DriverOpts": null
                }

This container is also part of the overlaynet1 network and has an IP of 172.45.0.6.

  1. Both services are deployed in the same network but are running on different nodes. Docker uses the underlay network to carry the encapsulated traffic of the overlay network. Check the connectivity between the services by attempting a ping from one service to the other. Since both containers are on the same overlay network, you can use Docker DNS. Get a shell inside the container on Machine 2 and ping the container on Machine 1:
/ # ping alpine-overlay1
ping: bad address 'alpine-overlay1'

The containers could not reach each other by name on either node. I cleaned up everything and created a new overlay network with a subnet of 10.10.0.0 and the gateway set to 10.10.0.1.

My new setup consists of an Ubuntu machine as the manager and a Rocky Linux machine as the worker.

To make DNS work, allow the following ports on both nodes:

2377/tcp
7946/tcp
7946/udp
4789/udp
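On the Rocky Linux worker these can be opened with firewalld, for example (a sketch; use equivalent ufw rules on the Ubuntu manager):

firewall-cmd --permanent --add-port=2377/tcp --add-port=7946/tcp --add-port=7946/udp --add-port=4789/udp
firewall-cmd --reload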

Add the following line to /etc/sysconfig/docker on CentOS/Rocky Linux and to /etc/default/docker on Ubuntu:

OPTIONS="--dns=10.10.0.1 --dns-search=example.com --dns-opt=use-vc"

Now ping works:

[root@rockylinux ~]# docker exec -it alpine-overlay2.1.g7a2flc37ir1w4wl64d3sksfn /bin/sh
/ #
/ # ping alpine-overlay1
PING alpine-overlay1 (10.10.0.2): 56 data bytes
64 bytes from 10.10.0.2: seq=0 ttl=64 time=2.586 ms
64 bytes from 10.10.0.2: seq=1 ttl=64 time=0.207 ms
64 bytes from 10.10.0.2: seq=2 ttl=64 time=0.255 ms
64 bytes from 10.10.0.2: seq=3 ttl=64 time=0.213 ms
64 bytes from 10.10.0.2: seq=4 ttl=64 time=0.207 ms
64 bytes from 10.10.0.2: seq=5 ttl=64 time=0.208 ms
^C
--- alpine-overlay1 ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 0.207/0.612/2.586 ms
/ #
  1. Use docker service rm to delete both services from the Machine 1 node:
root@ubuntu-local:~# docker service ls
ID             NAME              MODE         REPLICAS   IMAGE           PORTS
y11tctmdy5hm   alpine-overlay1   replicated   1/1        alpine:latest
mx5uie6ock3r   alpine-overlay2   replicated   1/1        alpine:latest
vgbnr0bcel7t   testserver        replicated   1/1        nginx:latest
root@ubuntu-local:~#
root@ubuntu-local:~# docker service rm alpine-overlay1
alpine-overlay1
root@ubuntu-local:~# docker service rm alpine-overlay2
alpine-overlay2
root@ubuntu-local:~#
  1. Remove the network:
root@ubuntu-local:~# docker network rm overlaynet1
overlaynet1
  1. Execute docker swarm leave on both nodes to destroy and leave the cluster:
# MACHINE 1
root@ubuntu-local:~# docker swarm leave --force
Node left the swarm.

# MACHINE 2
[root@rockylinux ~]# docker swarm leave
Node left the swarm.

Third-Party Network Drivers

Docker supports custom network drivers, which can be written by users or obtained from third parties on Docker Hub.

Third-party drivers provide additional features, such as access to external resources or the ability to define rules for communication between containerized applications.

You’ll download the Weave Net driver and create a network on your Docker host. Weave Net provides the ability to create a virtual network that connects containers across multiple hosts and enables automatic discovery.

Exercise 6.05 - Installing and Configuring the Weave Net Docker Network Driver

In this exercise, you will download and install the Weave Net Docker network driver and deploy it within the Docker swarm cluster you created in the previous exercise. Weave Net is one of the most common and flexible third-party Docker network drivers available. Using Weave Net, very complex networking configurations can be defined to enable maximum flexibility in your infrastructure.

  1. Initialize a swarm cluster on Machine 1, and then join Machine 2 as a worker:
# MACHINE 1
root@ubuntu-local:~# docker swarm init
Swarm initialized: current node (n2umqec48mu29gfaji3b4kp2v) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-2ns7jb617q9umn4yc7c3m01at83cktqlfn39a5jaqwy41j76nf-5wmvfpc4flszcvkonojrkgkxt 192.168.100.104:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.



# MACHINE 2
[root@rockylinux ~]# docker swarm join --token SWMTKN-1-2ns7jb617q9umn4yc7c3m01at83cktqlfn39a5jaqwy41j76nf-5wmvfpc4flszcvkonojrkgkxt 192.168.100.104:2377
This node joined a swarm as a worker.
  1. Install the plugin on Machine 1 with docker plugin install:
root@ubuntu-local:~# docker plugin install store/weaveworks/net-plugin:2.5.2
Plugin "store/weaveworks/net-plugin:2.5.2" is requesting the following privileges:
 - network: [host]
 - mount: [/proc/]
 - mount: [/var/run/docker.sock]
 - mount: [/var/lib/]
 - mount: [/etc/]
 - mount: [/lib/modules/]
 - capabilities: [CAP_SYS_ADMIN CAP_NET_ADMIN CAP_SYS_MODULE]
Do you grant the above permissions? [y/N] y
2.5.2: Pulling from store/weaveworks/net-plugin
Digest: sha256:b968d45872d72ef5f1e674baa61d384db17ef7d6be85338fa24b1d5d4651eb04
808b21b1a419: Complete
Installed plugin store/weaveworks/net-plugin:2.5.2
  1. Repeat the same thing on Machine 2:
[root@rockylinux ~]# docker plugin install store/weaveworks/net-plugin:2.5.2
Plugin "store/weaveworks/net-plugin:2.5.2" is requesting the following privileges:
 - network: [host]
 - mount: [/proc/]
 - mount: [/var/run/docker.sock]
 - mount: [/var/lib/]
 - mount: [/etc/]
 - mount: [/lib/modules/]
 - capabilities: [CAP_SYS_ADMIN CAP_NET_ADMIN CAP_SYS_MODULE]
Do you grant the above permissions? [y/N] y
2.5.2: Pulling from store/weaveworks/net-plugin
Digest: sha256:b968d45872d72ef5f1e674baa61d384db17ef7d6be85338fa24b1d5d4651eb04
808b21b1a419: Complete
Installed plugin store/weaveworks/net-plugin:2.5.2
  1. Create a network on Machine 1. Specify Weave Net as the driver, and the network name as weavenet1. I’ll use the 10.10.0.0/16 subnet and set the gateway to 10.10.0.1:
root@ubuntu-local:~# docker network create weavenet1 --driver store/weaveworks/net-plugin:2.5.2 --subnet 10.10.0.0/16 --gateway 10.10.0.1
wfb7apx921r3syjjy1yl0yaol
  1. Execute docker network ls to ensure that the weavenet1 network is present:
root@ubuntu-local:~# docker network ls
NETWORK ID     NAME              DRIVER                              SCOPE
4fa68e1e7017   bridge            bridge                              local
44dc00e70c72   docker_gwbridge   bridge                              local
67c9f0c3763f   host              host                                local
tjp7qrzt3usw   ingress           overlay                             swarm
81452ec3851a   none              null                                local
wfb7apx921r3   weavenet1         store/weaveworks/net-plugin:2.5.2   swarm
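Two Alpine services named alpine-weavenet1 and alpine-weavenet2 need to be running on the weavenet1 network for the next step. Creating them mirrors the service commands from Exercise 6.04; a sketch, assuming the same options as in that exercise:

docker service create -t --replicas 1 --network weavenet1 --name alpine-weavenet1 alpine
docker service create -t --replicas 1 --network weavenet1 --name alpine-weavenet2 alpine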
  1. Get a shell inside alpine-weavenet1, and ping alpine-weavenet2:
root@ubuntu-local:~# docker exec -it alpine-weavenet1.1.a6z0614hze6agx161b3gr2ypr /bin/sh
/ #
/ #
/ # ping alpine-weavenet2.1.zkgurxmip4vtfe0i991tr6cjh
PING alpine-weavenet2.1.zkgurxmip4vtfe0i991tr6cjh (10.1.1.2): 56 data bytes
64 bytes from 10.1.1.2: seq=0 ttl=64 time=0.225 ms
64 bytes from 10.1.1.2: seq=1 ttl=64 time=0.256 ms
64 bytes from 10.1.1.2: seq=2 ttl=64 time=0.246 ms
64 bytes from 10.1.1.2: seq=3 ttl=64 time=0.383 ms
64 bytes from 10.1.1.2: seq=4 ttl=64 time=0.233 ms
^C
--- alpine-weavenet2.1.zkgurxmip4vtfe0i991tr6cjh ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.225/0.268/0.383 ms
/ #

With the Weave Net driver, DNS resolution worked without modifying the DNS options as I had to in the previous exercise. Note that the full task name must be used, not just alpine-weavenet2.

  1. Try pinging an external network:
/ # ping google.com -c 5
PING google.com (216.58.223.78): 56 data bytes
64 bytes from 216.58.223.78: seq=0 ttl=119 time=43.344 ms
64 bytes from 216.58.223.78: seq=1 ttl=119 time=40.639 ms
64 bytes from 216.58.223.78: seq=2 ttl=119 time=43.160 ms
64 bytes from 216.58.223.78: seq=3 ttl=119 time=42.341 ms
64 bytes from 216.58.223.78: seq=4 ttl=119 time=42.474 ms

--- google.com ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 40.639/42.391/43.344 ms
/ #

The container does have internet access.

  1. Remove both services and the weavenet1 network:
root@ubuntu-local:~# docker service rm alpine-weavenet1
alpine-weavenet1
root@ubuntu-local:~# docker service rm alpine-weavenet2
alpine-weavenet2
root@ubuntu-local:~#
root@ubuntu-local:~#
root@ubuntu-local:~# docker network rm weavenet1
weavenet1
root@ubuntu-local:~

Activity 6.01 - Leveraging Docker Network Drivers

In this activity, you are going to deploy an example container from the Panoramic Trekking application in a Docker bridge network. You will then deploy a secondary container in host networking mode that will serve as a monitoring server and will be able to use curl to verify that the application is running as expected.

Perform the following steps to complete this activity:

  1. Create a custom Docker bridge network with a custom subnet and gateway IP:
root@ubuntu-local:~# docker network create mybridge1 --subnet 172.45.0.0/24 --gateway 172.45.0.1
c35222441d57e9794013e5a03e6a6ef9a3ccb4817c846a2e7f847aeaf656118b
root@ubuntu-local:~# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
249eaa3dd35e   bridge            bridge    local
44dc00e70c72   docker_gwbridge   bridge    local
67c9f0c3763f   host              host      local
c35222441d57   mybridge1         bridge    local
81452ec3851a   none              null      local
  1. Deploy an NGINX web server called webserver1 in that bridge network, forwarding port 80 on the container to port 8080 on the host:
root@ubuntu-local:~# docker run -d --network mybridge1 -p 8080:80 --name webserver1 nginx

c4acdbcac633b93c5d7e6d4f42131d9d3cc5682b5c1c0daa2044df49d210c0d5
  1. Deploy an Alpine Linux container in host networking mode, which will serve as a monitoring container:
root@ubuntu-local:~# docker run -itd --network host --name alpinehost1 alpine
88aea279a1dccc7debc231da2431b651e58d4a57abb64a97aba5e9c0b76373a6

root@ubuntu-local:~# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS PORTS      NAMES
88aea279a1dc   alpine    "/bin/sh"                14 seconds ago   Up 12 seconds     alpinehost1
c4acdbcac633   nginx     "/docker-entrypoint.…"   2 minutes ago    Up 2 minutes    0.0.0.0:8080->80/tcp   webserver1
  1. Use the Alpine Linux container to curl (install it with apk add curl) the NGINX web server and get a response. Whether you connect to the forwarded port 8080 or to the IP address of the webserver1 container directly on port 80, you should get the default NGINX page:
root@ubuntu-local:~# docker exec -it alpinehost1 /bin/sh
/ #
/ # curl 172.45.0.2 -s | grep -i title
<title>Welcome to nginx!</title>
/ #
/ # curl http://localhost:8080 -s | grep -i title
<title>Welcome to nginx!</title>
/ #

Activity 6.02 - Overlay Networking in Action

In this activity, you will revisit the two-node Docker swarm cluster and create services from the Panoramic Trekking application that will connect using Docker DNS between two hosts. In this scenario, different microservices will be running on different Docker swarm hosts but will still be able to leverage the Docker overlay network to directly communicate with each other.

To complete this activity successfully, perform the following steps:

  1. Create a Docker overlay network using a custom subnet and gateway on Machine 1:
root@ubuntu-local:~# docker swarm init
Swarm initialized: current node (29ou01ypcto6ofltot826apmh) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5ecvd8zstjs7x7b3x809olmue4jws23bx70femckhtxj3d8bzl-5piw6r9pr6opl5n04ue2mvwrr 192.168.100.104:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

root@ubuntu-local:~# docker network create panoramic --subnet 10.1.1.0/16 --gateway 10.1.1.1 -d overlay
qcku1f3cis7446rx4nxndlr5d
  1. Join Machine 2 to the swarm as a worker:
[root@rockylinux ~]# docker swarm join --token SWMTKN-1-5ecvd8zstjs7x7b3x809olmue4jws23bx70femckhtxj3d8bzl-5piw6r9pr6opl5n04ue2mvwrr 192.168.100.104:2377
This node joined a swarm as a worker.
[root@rockylinux ~]#
  1. Create a Docker swarm service called trekking using an Alpine Linux container on Machine 1:
root@ubuntu-local:~# docker service create -t --name trekking --replicas=1 --network panoramic alpine

image alpine:latest could not be accessed on a registry to record
its digest. Each node will access alpine:latest independently,
possibly leading to different nodes running different
versions of the image.

wssfzw0m6es01digxybeudyve
overall progress: 1 out of 1 tasks
1/1: running
verify: Service converged
root@ubuntu-local:~#
  1. Create a database service called database, setting the credentials via environment variables and using postgres:12 as the image:
root@ubuntu-local:~# docker service create -t --name database --replicas=1 --network panoramic -e "POSTGRES_USER=panoramic" -e "POSTGRES_PASSWORD=trekking" postgres:12
uf3u09dhqygzwik8fqu5bw1xq
overall progress: 1 out of 1 tasks
1/1: running
verify: Service converged
  1. Get a shell inside trekking and ping database:
[root@rockylinux ~]# docker exec -it trekking.1.qcinn2piaoh2j75xv1okssl05 /bin/sh
/ #
/ # ping database
PING database (10.1.0.4): 56 data bytes
64 bytes from 10.1.0.4: seq=0 ttl=64 time=0.590 ms
64 bytes from 10.1.0.4: seq=1 ttl=64 time=0.210 ms
64 bytes from 10.1.0.4: seq=2 ttl=64 time=0.395 ms
64 bytes from 10.1.0.4: seq=3 ttl=64 time=0.245 ms
64 bytes from 10.1.0.4: seq=4 ttl=64 time=0.218 ms
^C
--- database ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.210/0.331/0.590 ms
/ #

Connectivity between the Docker swarm services was successful, using Docker DNS for name resolution.