# Docker Networking
Next: **[Drivers](02-drivers.md)**
## CNM Driver Interfaces
The Container Networking Model provides two pluggable and open interfaces, the network driver and IPAM driver interfaces, that users, the community, and vendors can use to add functionality, visibility, or control to the network. The built-in network drivers described below ship with Docker Engine.
- __Bridge__ — The `bridge` driver creates a Linux bridge on the host that is
managed by Docker. By default containers on a bridge will be able to communicate
with each other. External access to containers can also be configured through the
`bridge` driver.
- __MACVLAN__ — The `macvlan` driver uses the MACVLAN bridge mode to establish a
connection between container interfaces and a parent host interface (or sub-
interfaces). It can be used to provide IP addresses to containers that are routable
on the physical network. Additionally VLANs can be trunked to the `macvlan` driver
to enforce Layer 2 container segmentation.
- __Host__ — With the `host` driver, a container uses the networking stack of the
host. There is no namespace separation, and all interfaces on the host can be used
directly by the container.
- __None__ — The `none` driver gives a container its own networking stack and
network namespace but does not configure interfaces inside the container. Without
additional configuration, the container is completely isolated from the host
networking stack.
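For a quick, hedged comparison of the `host` and `none` drivers, the commands below (the busybox image and `--rm` flag are just illustrative choices) print the interfaces each container sees:

```bash
# A host-networked container sees every interface on the host
$ docker run --rm --net host busybox ip addr

# A none-networked container sees only its own loopback interface
$ docker run --rm --net none busybox ip addr
```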
The `docker network ls` command shows these default Docker networks for a Docker
Swarm:
```
NETWORK ID          NAME                DRIVER              SCOPE
1475f03fbecb        bridge              bridge              local
e2d8a4bd86cb        docker_gwbridge     bridge              local
407c477060e7        host                host                local
f4zr3zrswlyg        ingress             overlay             swarm
c97909a4b198        none                null                local
```
| Driver | Description |
|------|------|
| [**contiv**](http://contiv.github.io/) | An open source network plugin led by Cisco Systems to provide infrastructure and security policies for multi-tenant microservices deployments. Contiv also provides integration for non-container workloads and with physical networks, such as ACI. Contiv implements plug-in network and IPAM drivers. |
| [**weave**](https://www.weave.works/docs/net/latest/introducing-weave/) | A network plugin that creates a virtual network that connects Docker containers across multiple hosts or clouds. Weave provides automatic discovery of applications, can operate on partially connected networks, does not require an external cluster store, and is operations friendly. |
| [**calico**](https://www.projectcalico.org/) | Calico is an open source solution for virtual networking in cloud datacenters. It targets datacenters where most of the workloads (VMs, containers, or bare metal servers) only require IP connectivity. Calico provides this connectivity using standard IP routing. Isolation between workloads, whether according to tenant ownership or any finer grained policy, is achieved via iptables programming on the servers hosting the source and destination workloads. |
| [**kuryr**](https://github.com/openstack/kuryr) | A network plugin developed as part of the OpenStack Kuryr project. It implements the Docker networking (libnetwork) remote driver API by utilizing Neutron, the OpenStack networking service. Kuryr includes an IPAM driver as well. |

| Driver | Description |
|------|------|
| [**infoblox**](https://store.docker.com/community/images/infoblox/ipam-driver) | An open source IPAM plugin that provides integration with existing Infoblox tools. |
> Many Docker plugins exist and more are being created all the time. Docker maintains a list of the [most common plugins](https://docs.docker.com/engine/extend/legacy_plugins/).
The Linux kernel features an extremely mature and performant implementation of the
TCP/IP stack (in addition to other native kernel features like DNS and VXLAN).
Docker networking uses the kernel's networking stack as low level primitives to
create higher level network drivers. Simply put, _Docker networking <b>is</b> Linux
networking._
This implementation of existing Linux kernel features ensures high performance and
robustness. Most importantly, it provides portability across many distributions and
versions which enhances application portability.
There are several Linux networking building blocks which Docker uses to implement its built-in CNM network drivers. This list includes **Linux bridges**, **network namespaces**, **veth pairs**, and **iptables**. The combination of these tools, implemented as network drivers, provides the forwarding rules, network segmentation, and management tools for complex network policy.
### iptables
**`iptables`** is the native packet filtering system that has been a part of the Linux kernel since version 2.4. It's a feature-rich L3/L4 firewall that provides rule chains for packet marking, masquerading, and dropping. The built-in Docker network drivers utilize `iptables` extensively to segment network traffic, provide host port mapping, and mark traffic for load balancing decisions.
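A hedged way to see some of these rules on a Docker host (requires root; the exact output varies with the containers and networks present):

```bash
# Port-mapping (DNAT) rules that Docker programs for published ports
$ sudo iptables -t nat -nL DOCKER

# Masquerading rules applied to outbound container traffic
$ sudo iptables -t nat -nL POSTROUTING
```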
This section explains the default Docker bridge network as well as user-defined
bridge networks.
On a standalone Docker host, `bridge` is the default network that containers connect to if no other network is specified. In the following example a container is created with no network parameters. Docker Engine connects it to the `bridge` network by default. Inside the container we can see `eth0`, which is created by the `bridge` driver and given an address by the Docker built-in IPAM driver.
```bash
#Create a busybox container named "c1" and show its IP addresses
host$ docker run -it --name c1 busybox sh
c1 # ip address
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 scope global eth0
...
```
> A container interface's MAC address is dynamically generated and embeds the IP
address to avoid collision. Here `ac:11:00:02` corresponds to `172.17.0.2`.
By using the tool `brctl` on the host, we show the Linux bridges that exist in the host network namespace. It shows a single bridge called `docker0`. `docker0` has one interface, `vethb64e8b8`, which provides connectivity from the bridge to the `eth0` interface inside container `c1`.
```
host$ brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242504b5200 no vethb64e8b8
```
Inside container `c1` we can see the container routing table that directs traffic
to `eth0` of the container and thus the `docker0` bridge.
```bash
c1# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 src 172.17.0.2
```
A container can have zero to many interfaces depending on how many networks it is
connected to. Each Docker network can only have a single interface per container.
When we peek into the host routing table we can see the IP interfaces in the global network namespace, which now includes `docker0`. The host routing table provides connectivity between `docker0` and `eth0` on the external network, completing the path from inside the container to the external network.
```bash
host$ ip route
default via 172.31.16.1 dev eth0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
172.31.16.0/20 dev eth0 proto kernel scope link src 172.31.16.102
```
By default `bridge` will be assigned one subnet from the ranges 172.[17-31].0.0/16 or 192.168.[0-240].0/20 which does not overlap with any existing host interface. The default `bridge` network can also be configured to use user-supplied address ranges. Also, an existing Linux bridge can be used for the `bridge` network rather than Docker creating one. Go to the [Docker Engine docs](https://docs.docker.com/engine/userguide/networking/default_network/custom-docker0/) for more information about customizing `bridge`.
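As a hedged sketch of that kind of customization, the daemon can be started with flags such as `--bip` and `--bridge` (the values below are illustrative only):

```bash
# Give docker0 a user-supplied address and subnet
$ dockerd --bip 172.26.0.1/16

# Or hand Docker an existing Linux bridge to use instead of docker0
$ dockerd --bridge br0
```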
> The default `bridge` network is the only network that supports legacy [links](https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/). Name-based service discovery and user-provided IP addresses are __not__ supported by the default `bridge` network.
Below we are creating a user-defined `bridge` network and attaching two containers
to it. We specify a subnet and call the network `my_bridge`. One container is not
given IP parameters, so the IPAM driver assigns it the next available IP in the
subnet. The other container has its IP specified.
```
$ docker network create -d bridge --subnet 10.0.0.0/24 my_bridge
$ docker run -itd --name c2 --net my_bridge busybox sh
$ docker run -itd --name c3 --net my_bridge --ip 10.0.0.254 busybox sh
```
`brctl` now shows a second Linux bridge on the host. The name of the Linux bridge, `br-b5db4578d8c9`, matches the Network ID of the `my_bridge` network. `my_bridge` also has two `veth` interfaces connected to containers `c2` and `c3`.
```
$ brctl show
bridge name       bridge id           STP enabled   interfaces
br-b5db4578d8c9   8000.02428d936bb1   no            vethc9b3282
                                                    vethf3ba8b5
docker0           8000.0242504b5200   no            vethb64e8b8

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
b5db4578d8c9        my_bridge           bridge              local
e1cac9da3116        bridge              bridge              local
...
```
Listing the global network namespace interfaces shows the Linux networking
circuitry that's been instantiated by Docker Engine. Each `veth` and Linux bridge
interface appears as a link between one of the Linux bridges and the container
network namespaces.
```bash
$ ip link
<Snip>
```
Docker `bridge` networks are not exposed on the external (underlay) host network by
default. Container interfaces are given IPs on the private subnets of the bridge
network. Containers communicating with the external network are port mapped or
masqueraded so that their traffic uses an IP address of the host. The example below
shows outbound and inbound container traffic passing between the host interface and
a user-defined `bridge` network.
The previous diagram shows how port mapping and masquerading take place on a host. Container `C2` is connected to the `my_bridge` network and has an IP address of `10.0.0.2`. When it initiates outbound traffic, the traffic is masqueraded so that it is sourced from ephemeral port `32768` on the host interface `192.168.0.2`. Return traffic uses the same IP address and port for its destination and is masqueraded internally back to the container address:port `10.0.0.2:33920`.
Exposed ports can be configured using `--publish` in the Docker CLI or UCP. The diagram shows an exposed port with the container port `80` mapped to the host interface on port `5000`. The exposed container would be advertised at `192.168.0.2:5000`, and all traffic going to this interface:port would be sent to the container at `10.0.0.2:80`.
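A hedged recreation of that mapping (the container name and image are illustrative, and the host address will match your environment rather than the diagram):

```bash
# Publish container port 80 on host port 5000
$ docker run -d --name web --net my_bridge -p 5000:80 nginx

# Confirm the mapping; expect something like "80/tcp -> 0.0.0.0:5000"
$ docker port web
```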
The built-in Docker `overlay` network driver radically simplifies many of the
challenges in multi-host networking. With the `overlay` driver, multi-host networks
are first-class citizens inside Docker without external provisioning or components.
`overlay` uses the Swarm-distributed control plane to provide centralized management, stability, and security across very large scale clusters.
> VXLAN has been a part of the Linux kernel since version 3.7, and Docker uses the
native VXLAN features of the kernel to create overlay networks. The Docker overlay
datapath is entirely in kernel space. This results in fewer context switches, less
CPU overhead, and a low-latency, direct traffic path between applications and the
physical NIC.
In this diagram we see the packet flow on an overlay network. Here are the steps
that take place when `c1` sends `c2` packets across their shared overlay network:
- `c1` does a DNS lookup for `c2`. Since both containers are on the same overlay
network the Docker Engine local DNS server resolves `c2` to its overlay IP address
`10.0.0.3`.
- An overlay network is a L2 segment so `c1` generates an L2 frame destined for the
MAC address of `c2`.
- The frame is encapsulated with a VXLAN header by the `overlay` network driver. The distributed overlay control plane manages the locations and state of each VXLAN tunnel endpoint, so it knows that `c2` resides on `host-B` at the physical address of `192.168.1.3`. That address becomes the destination address of the underlay IP header.
- Once encapsulated, the packet is sent. The physical network is responsible for routing or bridging the VXLAN packet to the correct host.
- The packet arrives at the `eth0` interface of `host-B` and is decapsulated by the `overlay` network driver. The original L2 frame from `c1` is passed to `c2`'s `eth0` interface and up to the listening application.
During overlay network creation, Docker Engine creates the network infrastructure
required for overlays on each host. A Linux bridge is created per overlay along
with its associated VXLAN interfaces. The Docker Engine intelligently instantiates
overlay networks on hosts only when a container attached to that network is
scheduled on the host. This prevents sprawl of overlay networks on hosts where no connected containers exist.
```bash
#Create an overlay named "ovnet" with the overlay driver
$ docker network create -d overlay ovnet
#Create a service from an nginx image and connect it to the "ovnet" overlay network
$ docker service create --network ovnet --name container nginx
```
When the overlay network is created, you will notice that several interfaces and
bridges are created inside the host.
```bash
# Run the "ifconfig" command inside the nginx container
$ docker exec -it container ifconfig
#docker_gwbridge network
eth1 Link encap:Ethernet HWaddr 02:42:AC:12:00:04
inet addr:172.18.0.4 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe12:4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648 (648.0 B) TX bytes:648 (648.0 B)
#overlay network
eth2 Link encap:Ethernet HWaddr 02:42:0A:00:00:07
inet addr:10.0.0.7 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:aff:fe00:7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648 (648.0 B) TX bytes:648 (648.0 B)
#container loopback
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:48 errors:0 dropped:0 overruns:0 frame:0
TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4032 (3.9 KiB) TX bytes:4032 (3.9 KiB)
```
Two interfaces have been created inside the container that correspond to two
bridges that now exist on the host. On overlay networks, each container will have
at least two interfaces that connect it to the `overlay` and the `docker_gwbridge`.
| Bridge | Purpose |
|:------:|------|
| **overlay** | The ingress and egress point to the overlay network that VXLAN encapsulates and (optionally) encrypts traffic going between containers on the same overlay network. It extends the overlay across all hosts participating in this particular overlay. One will exist per overlay subnet on a host, and it will have the same name that a particular overlay network is given. |
| **docker_gwbridge** | The egress bridge for traffic leaving the cluster. Only one `docker_gwbridge` will exist per host. Container-to-container traffic is blocked on this bridge, allowing ingress/egress traffic flows only. |
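On a host where an overlay network has been instantiated, a hedged way to see both bridges and their attachments (output varies by host):

```bash
# List the Linux bridges, including the per-overlay bridge and docker_gwbridge
$ brctl show

# Inspect the shared egress bridge that overlay-attached containers use to leave the host
$ docker network inspect docker_gwbridge
```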
> The Docker Overlay driver has existed since Docker Engine 1.9, when an external K/V store was required to manage state for the network. Docker 1.12 integrated the control plane state into Docker Engine so that an external store is no longer required. 1.12 also introduced several new features including encryption and service load balancing. Networking features that are introduced require a Docker Engine version that supports them, and using these features with older versions of Docker Engine is not supported.
Next: **[MACVLAN](07-macvlan.md)**
## <a name="macvlandriver"></a>MACVLAN
The `macvlan` driver is a new implementation of the tried and true network
virtualization technique. The Linux implementations are extremely lightweight
because rather than using a Linux bridge for isolation, they are simply associated
with a Linux Ethernet interface or sub-interface to enforce separation between
networks and connectivity to the physical network.
The `macvlan` driver uses the concept of a parent interface. This interface can be a physical interface such as `eth0`, a sub-interface for 802.1q VLAN tagging like `eth0.10` (`.10` representing `VLAN 10`), or even a bonded host adapter that bundles two Ethernet interfaces into a single logical interface.
In this example, we bind a MACVLAN network to `eth0` on the host. We attach two
containers to the `mvnet` MACVLAN network and show that they can ping between
themselves. Each container has an address on the `192.168.0.0/24` physical network
subnet and their default gateway is an interface in the physical network.
```bash
#Creation of MACVLAN network "mvnet" bound to eth0 on the host
$ docker network create -d macvlan --subnet 192.168.0.0/24 --gateway 192.168.0.1 -o parent=eth0 mvnet
```
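The container attachments and ping described above are not shown here; a minimal sketch, assuming busybox containers and addresses from this subnet:

```bash
# Attach two containers to "mvnet" with addresses on the physical subnet
$ docker run -itd --name c1 --net mvnet --ip 192.168.0.3 busybox sh
$ docker run -it --name c2 --net mvnet --ip 192.168.0.4 busybox sh

# From inside c2, verify connectivity to c1
/ # ping -c 2 192.168.0.3
```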
As you can see in this diagram, `c1` and `c2` are attached to the MACVLAN network called `mvnet`, which is bound to `eth0` on the host.
```bash
#Creation of macvlan10 network that will be in VLAN 10
$ docker network create -d macvlan --subnet 192.168.10.0/24 --gateway 192.168.10.1 -o parent=eth0.10 macvlan10
```
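The paragraph below refers to two VLAN-tagged networks, but only `macvlan10` is shown above. A hedged sketch of the second one, assuming a `VLAN 20` sub-interface:

```bash
#Creation of macvlan20 network that will be in VLAN 20
$ docker network create -d macvlan --subnet 192.168.20.0/24 --gateway 192.168.20.1 -o parent=eth0.20 macvlan20
```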
In the preceding configuration we've created two separate networks using the
`macvlan` driver that are configured to use a sub-interface as their parent
interface. The `macvlan` driver creates the sub-interfaces and connects them
between the host's `eth0` and the container interfaces. The host interface and
upstream switch must be set to `switchport mode trunk` so that VLANs are tagged
going across the interface. One or more containers can be connected to a given
MACVLAN network to create complex network policies that are segmented via L2.
> Because multiple MAC addresses are living behind a single host interface you
might need to enable promiscuous mode on the interface depending on the NIC's
support for MAC filtering.
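If that is needed, a hedged way to enable it (the interface name is illustrative):

```bash
# Allow the NIC to receive frames for all of the container MAC addresses behind it
$ sudo ip link set eth0 promisc on
```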
The `host` network driver connects a container directly to the host networking stack. Containers using the `host` driver reside in the same network namespace as the host itself. Thus, containers have native bare-metal network performance at the cost of namespace isolation.
```bash
#Create a container attached to the host network namespace and print its network interfaces
$ docker run -it --net host --name c1 busybox ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:19:5F:BC:F7
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
<Snip>
```
In this example we can see that the host and container `c1` share the same
interfaces. This has some interesting implications. Traffic passes directly from
the container to the host interfaces.
With the `host` driver, Docker does not manage any portion of the container networking stack, such as port mapping or routing rules. This means that common networking flags like `-p` and `--icc` have no meaning for the `host` driver; they are ignored. If the network admin wishes to provide access and policy to containers, then this will have to be self-managed on the host or managed by another tool.
Every container using the `host` network shares the same host interfaces. This makes `host` ill suited for multi-tenant or highly secure applications. `host` containers will have access to every other container on the host.
Full host access and no automated policy management may make the `host` driver a
difficult fit as a general network driver. However, `host` does have some
interesting properties that may be applicable for use cases such as ultra high
performance applications, troubleshooting, or monitoring.
Similar to the `host` network driver, the `none` network driver is essentially an unmanaged networking option. Docker Engine will not create interfaces inside the container, establish port mapping, or install routes for connectivity. A container using `--net=none` is completely isolated from other containers and the host. The networking admin or external tools must be responsible for providing this plumbing. In the following example we see that a container using `none` only has a loopback interface and no other interfaces.
```bash
#Create a container using --net=none and display its interfaces
$ docker run -it --net none busybox ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
```
Unlike the `host` driver, the `none` driver will create a separate namespace for
each container. This guarantees container network isolation between any containers
and the host.
Docker networking places minimal demands on the physical (underlay) network; IP connectivity between hosts is sufficient, and none of the following are required:
- Multicast
- External key-value stores
- Specific routing protocols
- Layer 2 adjacencies between hosts
- Specific topologies such as spine & leaf, traditional 3-tier, and PoD designs.
Any of these topologies are supported.
This is in line with the Container Networking Model which promotes application
portability across all environments while still achieving the performance and
policy required of applications.
Docker uses embedded DNS to provide service discovery for containers running on a single Docker Engine and `tasks` running in a Docker Swarm. Docker Engine has an internal DNS server that provides name resolution to all of the containers on the host in user-defined bridge, overlay, and MACVLAN networks. Each Docker container (or `task` in Swarm mode) has a DNS resolver that forwards DNS queries to Docker Engine, which acts as a DNS server. Docker Engine then checks if the DNS query belongs to a container or `service` on any network that the requesting container belongs to. If it does, then Docker Engine looks up the IP address that matches the **name** of the container, `task`, or `service` in its key-value store and returns that IP or the `service`'s Virtual IP (VIP) back to the requester.

```
# Create an overlay network called mynet
$ docker network create -d overlay mynet
a59umzkdj2r0ua7x8jxd84dhr
"VirtualIPs": [
{
"NetworkID": "a59umzkdj2r0ua7x8jxd84dhr",
"Addr": "10.0.0.3/24"
},
]
```
> DNS round robin (DNS RR) load balancing is another load balancing option for
services (configured with `--endpoint-mode`). In DNS RR mode a VIP is not created
for each service. The Docker DNS server resolves a service name to individual
container IPs in round robin fashion.
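A hedged example of opting into that mode (the service name is illustrative; `mynet` is the overlay created above):

```bash
# Resolve the service name to individual task IPs instead of a single VIP
$ docker service create --name dnsrr-demo --network mynet --endpoint-mode dnsrr nginx
```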
This is where routing mesh comes into play. Routing mesh is a feature introduced in Docker 1.12 that combines `ipvs` and `iptables` to create a powerful cluster-wide transport-layer (L4) load balancer. It allows all the Swarm nodes to accept connections on a service's published ports. When any Swarm node receives traffic destined to the published TCP/UDP port of a running `service`, it forwards the traffic to the service's VIP using a pre-defined overlay network called `ingress`. The `ingress` network behaves similarly to other overlay networks, but its sole purpose is to transport routing mesh traffic from external clients to cluster services. It uses the same VIP-based internal load balancing described in the previous section.
Once you launch services, you can create an external DNS record for your applications and map it to any or all Docker Swarm nodes. You do not need to worry about where your container is running, because with the routing mesh all nodes in your cluster behave as one.
```
#Create a service with two replicas and expose port 8000 on the cluster
$ docker service create --name app --replicas 2 --network appnet -p 8000:80 nginx
```

- A service is created with two replicas, and it is port mapped externally to port
`8000`.
- The routing mesh exposes port `8000` on each host in the cluster.
- Traffic destined for the `app` can enter on any host. In this case the external
LB sends the traffic to a host without a service replica.
- The kernel's IPVS load balancer redirects traffic on the `ingress` overlay
network to a healthy service replica.
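A hedged way to confirm this behavior from outside the cluster (the node address is a placeholder for any Swarm node, with or without a replica):

```bash
# Any node answers on the published port and forwards to a healthy replica
$ curl http://<any-swarm-node-ip>:8000
```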
Docker allows you to create an isolated network per application using the `overlay` driver. By default, different Docker networks are firewalled from each other. This approach provides true network isolation at Layer 3. No malicious container can communicate with your application's container unless it's on the same network or your application's containers expose services on the host port. Therefore, creating networks for each application adds another layer of security. The principle of "Defense in Depth" still recommends application-level security to protect at L3 and L7.
Docker Swarm comes with integrated PKI. All managers and nodes in the Swarm have a cryptographically signed identity in the form of a signed certificate. All manager-to-manager and manager-to-node control communication is secured out of the box with TLS. There is no need to generate certificates externally or set up any CAs manually to get end-to-end control plane traffic secured in Docker Swarm mode. Certificates are periodically and automatically rotated.
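As a hedged way to peek at one of these certificates on a Swarm node (the path shown is the usual default and may differ on your platform):

```bash
# Show the subject and validity window of the node's Swarm certificate
$ sudo openssl x509 -noout -subject -dates \
    -in /var/lib/docker/swarm/certificates/swarm-node.crt
```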
In Docker Swarm mode the data path (e.g. application traffic) can be encrypted out-
of-the-box. This feature uses IPSec tunnels to encrypt network traffic as it leaves
the source container and decrypts it as it enters the destination container. This
ensures that your application traffic is highly secure when it's in transit
regardless of the underlying networks. In a hybrid, multi-tenant, or multi-cloud
environment, it is crucial to ensure data is secure as it traverses networks you
might not have control over.
This diagram illustrates how to secure communication between two containers running
on different hosts in a Docker Swarm.
This feature works with the `overlay` driver in Swarm mode only and can be enabled per network at the time of creation by adding the `--opt encrypted=true` option (e.g. `docker network create -d overlay --opt encrypted=true <NETWORK_NAME>`). After the network gets created, you can launch services on that network (e.g. `docker service create --network <NETWORK_NAME> <IMAGE> <COMMAND>`). When two tasks of the same service are created on two different hosts, an IPsec tunnel is created between them and traffic gets encrypted as it leaves the source host and gets decrypted as it enters the destination host.
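Putting those two commands together, a hedged end-to-end sketch (network, service, and image names are illustrative):

```bash
# Create an encrypted overlay network
$ docker network create -d overlay --opt encrypted=true secure-net

# Launch a multi-replica service on it; cross-host traffic between its tasks is IPsec-encrypted
$ docker service create --name web --network secure-net --replicas 2 nginx
```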
The Container Networking Model (CNM) provides flexibility in how IP addresses are
managed. There are two methods for IP address management.
- CNM has a built-in IPAM driver that does simple allocation of IP addresses
globally for a cluster and prevents overlapping allocations. The built-in IPAM
driver is what is used by default if no other driver is specified.
- CNM has interfaces to use plug-in IPAM drivers from other vendors and the
community. These drivers can provide integration into existing vendor or self-built
IPAM tools.
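With the built-in IPAM driver, the subnet, allocation range, and gateway can all be supplied at network creation time; a hedged example (values are illustrative):

```bash
# Carve container addresses out of a specific range within the subnet
$ docker network create -d bridge \
    --subnet 10.1.0.0/16 --ip-range 10.1.4.0/24 --gateway 10.1.4.1 ipam-demo
```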
Subnet size and design is largely dependent on a given application and the specific
network driver. IP address space design is covered in more depth for each [Network
Deployment Model](#models) in the next section. The uses of port mapping, overlays,
and MACVLAN all have implications on how IP addressing is arranged. In general,
container addressing falls into two buckets. Internal container networks (bridge
and overlay) address containers with IP addresses that are not routable on the
physical network by default. MACVLAN networks provide IP addresses to containers
that are on the subnet of the physical network. Thus, traffic from container
interfaces can be routable on the physical network. It is important to note that
subnets for internal networks (bridge, overlay) should not conflict with the IP
space of the physical underlay network. Overlapping address space can cause traffic
to not reach its destination.
Docker network troubleshooting can be difficult for devops and network engineers. With a proper understanding of how Docker networking works and the right set of tools, you can troubleshoot and resolve these network issues. One recommended approach is to use the [netshoot](https://github.com/nicolaka/netshoot) container, which packages a set of powerful networking troubleshooting tools for diagnosing Docker network issues.
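A hedged example of how it is typically run, attaching it to another container's network namespace (`<container_name>` is a placeholder):

```bash
# Debug from inside the target container's network namespace with tools like tcpdump and netstat
$ docker run -it --net container:<container_name> nicolaka/netshoot
```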
Back to [Concepts](README.md)
or
On to [Tutorials](../tutorials.md)
## <a name="challenges"></a>Challenges of Networking Containers and Microservices
Microservices practices have increased the scale of applications which has put even
more importance on the methods of connectivity and isolation that we provide to
applications. The Docker networking philosophy is application driven. It aims to
provide options and flexibility to the network operators as well as the right level
of abstraction to the application developers.
Like any design, network design is a balancing act. __Docker Datacenter__ and the Docker ecosystem provide multiple tools to network engineers to achieve the best balance for their applications and environments. Each option provides different
benefits and tradeoffs. The remainder of this guide details each of these choices
so network engineers can understand what might be best for their environments.
Docker has developed a new way of delivering applications, and with that,
containers have also changed some aspects of how we approach networking. The
following topics are common design themes for containerized applications:
- __Portability__
- _How do I guarantee maximum portability across diverse network
environments while taking advantage of unique network characteristics?_
- __Service Discovery__
- _How do I know where services are living as they are scaled up and down?_
- __Load Balancing__
- _How do I share load across services as services themselves are brought
up and scaled?_
- __Security__
- _How do I segment so that only the right containers can access each other?_
- _How do I guarantee that a container with application and cluster control
traffic is secure?_
- __Performance__
- _How do I provide advanced network services while minimizing latency and
maximizing bandwidth?_
- __Scalability__
- _How do I ensure that none of these characteristics are sacrificed when
scaling applications across many hosts?_
1. [Drivers](02-drivers.md)
1. [Bridge Networks](05-bridge-networks.md)
1. [Overlay Networks](06-overlay-networks.md)
1. [MACVLAN](07-macvlan.md)
1. [Security](11-security.md)
1. [Troubleshooting](13-troubleshooting.md)
# Labs
# Lab Meta
> **Difficulty**: Beginner
In this lab you'll look at the most basic networking components that come with a
fresh installation of Docker.
# Prerequisites
The `docker network` command is the main command for configuring and managing
container networks.
Run a simple `docker network` command from any of your lab machines.
```
$ docker network
Options:
--help Print usage
Commands:
connect Connect a container to a network
create Create a network
disconnect Disconnect a container from a network
inspect Display detailed information on one or more networks
ls List networks
rm Remove one or more networks
```
The command output shows how to use the command as well as all of the `docker
network` sub-commands. As you can see from the output, the `docker network` command
allows you to create new networks, list existing networks, inspect networks, and
remove networks. It also allows you to connect and disconnect containers from
networks.
Run a `docker network ls` command to view existing container networks on the
current Docker host.
```
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
1befe23acd58 bridge bridge local
726ead8f4e6b host host local
ef4896538cc7 none null local
```
The output above shows the container networks that are created as part of a
standard installation of Docker.
New networks that you create will also show up in the output of the `docker network
ls` command.
You can see that each network gets a unique `ID` and `NAME`. Each network is also
associated with a single driver. Notice that the "bridge" network and the "host"
network have the same name as their respective drivers.
The `docker network inspect` command is used to view network configuration details. These details include the name, ID, driver, IPAM driver, subnet info, connected containers, and more.
```
$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "1befe23acd58cbda7290c45f6d1f5c37a3b43de645d48de6c1ffebd985c8af4b",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
```
> **NOTE:** The syntax of the `docker network inspect` command is `docker network inspect <network>`, where `<network>` can be either the network name or the network ID. In the example above we are showing the configuration details for the network called "bridge". Do not confuse this with the "bridge" driver.
The `docker info` command shows a lot of interesting information about a Docker
installation.
Run a `docker info` command on any of your Docker hosts and locate the list of
network plugins.
```
$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.12.3
Storage Driver: aufs
<Snip>
Plugins:
Volume: local
Network: bridge host null overlay <<<<<<<<
Swarm: inactive
Runtimes: runc
<Snip>
```
The output above shows the **bridge**, **host**, **null**, and **overlay** drivers.
# Bridge networking
# Lab Meta
In this lab you'll learn how to build, manage, and use **bridge** networks.
# Prerequisites
```
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
1befe23acd58 bridge bridge local
726ead8f4e6b host host local
ef4896538cc7 none null local
```
The output above shows that the **bridge** network is associated with the *bridge*
driver. It's important to note that the network and the driver are connected, but
they are not the same. In this example the network and the driver have the same
name - but they are not the same thing!
The output above also shows that the **bridge** network is scoped locally. This means that the network only exists on this Docker host. This is true of all networks using the *bridge* driver - the *bridge* driver provides single-host networking.
All networks created with the *bridge* driver are based on a Linux bridge (a.k.a. a
virtual switch).
Install the `brctl` command and use it to list the Linux bridges on your Docker
host.
```
# Install the brctl tools (provided by the bridge-utils package)
$ apt-get install bridge-utils

$ brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242f17f89a6       no
```
The output above shows a single Linux bridge called **docker0**. This is the bridge
that was automatically created for the **bridge** network. You can see that it has
no interfaces currently connected to it.
You can also use the `ip` command to view details of the **docker0** bridge.
```
$ ip a
<Snip>
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
group default
link/ether 02:42:f1:7f:89:a6 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:f1ff:fe7f:89a6/64 scope link
valid_lft forever preferred_lft forever
```
# <a name="connect-container"></a>Step 2: Connect a container
The **bridge** network is the default network for new containers. This means that
unless you specify a different network, all new containers will be connected to the
**bridge** network.
```
$ docker run -dt ubuntu sleep infinity
6dd93d6cdc806df6c7812b6202f6096e43d9a013e56e5e638ee4bfb4ae8779ce
```
This command will create a new container based on the `ubuntu:latest` image and
will run the `sleep` command to keep the container running in the background. As no
network was specified on the `docker run` command, the container will be added to
the **bridge** network.
```
$ brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242f17f89a6 no veth3a080f
```
Notice how the **docker0** bridge now has an interface connected. This interface
connects the **docker0** bridge to the new container just created.
Inspect the **bridge** network again to see the new container attached to it.
```
$ docker network inspect bridge
<Snip>
"Containers": {
"6dd93d6cdc806df6c7812b6202f6096e43d9a013e56e5e638ee4bfb4ae8779ce": {
"Name": "reverent_dubinsky",
"EndpointID":
"dda76da5577960b30492fdf1526c7dd7924725e5d654bed57b44e1a6e85e956c",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
<Snip>
```
The output to the previous `docker network inspect` command shows the IP address of
the new container. In the previous example it is "172.17.0.2" but yours might be
different.
Ping the IP address of the container from the shell prompt of your Docker host.
Remember to use the IP of the container in **your** environment.
```
$ ping 172.17.0.2
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.069 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.052 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.050 ms
64 bytes from 172.17.0.2: icmp_seq=4 ttl=64 time=0.049 ms
64 bytes from 172.17.0.2: icmp_seq=5 ttl=64 time=0.049 ms
^C
--- 172.17.0.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.049/0.053/0.069/0.012 ms
```
Press `Ctrl-C` to stop the ping. The replies above show that the Docker host can
ping the container over the **bridge** network.
```
# Get the ID of the container started in the previous step.
$ docker ps
CONTAINER ID        IMAGE     COMMAND             CREATED     STATUS    NAMES
6dd93d6cdc80        ubuntu    "sleep infinity"    5 mins      Up        reverent_dubinsky
```
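The connectivity test itself is not shown above; a hedged sketch of the kind of check implied, reusing the container ID from this lab (the target address and package install are illustrative):

```bash
# Exec into the container, install ping, and test external connectivity
$ docker exec -it 6dd93d6cdc80 /bin/bash
root@6dd93d6cdc80:/# apt-get update && apt-get install -y iputils-ping
root@6dd93d6cdc80:/# ping -c 3 www.docker.com
```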
This shows that the new container can ping the internet and therefore has a valid
and working network configuration.
In this step we'll start a new **NGINX** container and map port 8080 on the Docker
host to port 80 inside of the container. This means that traffic that hits the
Docker host on port 8080 will be passed on to port 80 inside the container.
> **NOTE:** If you start a new container from the official NGINX image without
specifying a command to run, the container will run a basic web server on port 80.
Start a new container based off the official NGINX image.
```
$ docker run --name web1 -d -p 8080:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
386a066cd84a: Pull complete
7bdb4b002d7f: Pull complete
49b006ddea70: Pull complete
Digest: sha256:9038d5645fa5fcca445d12e1b8979c87f46ca42cfb17beb1e5e093785991a639
Status: Downloaded newer image for nginx:latest
b747d43fa277ec5da4e904b932db2a3fe4047991007c2d3649e3f0c615961038
```
Check that the container is running and view the port mapping.
```
$ docker ps
CONTAINER ID   IMAGE    COMMAND                  CREATED             STATUS             PORTS                           NAMES
b747d43fa277   nginx    "nginx -g 'daemon off"   3 seconds ago       Up 2 seconds       443/tcp, 0.0.0.0:8080->80/tcp   web1
6dd93d6cdc80   ubuntu   "sleep infinity"         About an hour ago   Up About an hour                                   reverent_dubinsky
```
There are two containers listed in the output above. The top line shows the new **web1** container running NGINX. Take note of the command the container is running as well as the port mapping - `0.0.0.0:8080->80/tcp` maps port 8080 on all host interfaces to port 80 inside the **web1** container. This port mapping is what effectively makes the container's web service accessible from external sources (via the Docker host's IP address on port 8080).
Now that the container is running and mapped to a port on a host interface you can
test connectivity to the NGINX web server.
To complete the following task you will need the IP address of your Docker host. This will need to be an IP address that you can reach (e.g. if your lab is in AWS this will need to be the instance's public IP).
Point your web browser to the IP and port 8080 of your Docker host. The following
example shows a web browser pointed to `52.213.169.69:8080`

If you try connecting to the same IP address on a different port number it will
fail.
If for some reason you cannot open a session from a web browser, you can connect from your Docker host using the `curl` command.
```
$ curl 127.0.0.1:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<Snip>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
If you try and curl the IP address on a different port number it will fail.
> **NOTE:** The port mapping is actually port address translation (PAT).
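A hedged way to see the translation rule behind this mapping on the Docker host (requires root; the container IP in the rule will match your environment):

```bash
# Look for a DNAT rule translating tcp dpt:8080 to <container-ip>:80
$ sudo iptables -t nat -nL DOCKER
```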
# Overlay networking and service discovery
# Lab Meta
In this lab you'll learn how to build, manage, and use an **overlay** network with
a *service* in *Swarm mode*.
# Prerequisites
- Two Linux-based Docker hosts running **Docker 1.12** or higher in Engine mode
(i.e. not yet configured for Swarm mode). You should use **node1** and **node2**
from your lab.
In this step you'll initialize a new Swarm, join a single worker node, and verify
the operations worked.
```
node1$ docker swarm init
Swarm initialized: current node (cw6jpk7pqfg0jkilff5hr8z42) is now a manager.
To add a worker to this swarm, run the following command:
To add a manager to this swarm, run 'docker swarm join-token manager' and
follow the instructions.
```
2. Copy the entire `docker swarm join` command that is displayed as part of the
output from the command.
3. Paste the copied command into the terminal of **node2**.
```
node2$ docker swarm join \
> --token SWMTKN-1-3n2iuzpj8jynx0zd8axr0ouoagvy0o75uk5aqjrn0297j4uaz7-
63eslya31oza2ob78b88zg5xe \
> 172.31.34.123:2377
```
4. Run a `docker node ls` on **node1** to verify that both nodes are part of the
Swarm.
```
node1$ docker node ls
ID                           HOSTNAME          STATUS  AVAILABILITY  MANAGER STATUS
4nb02fhvhy8sb0ygcvwya9skr    ip-172-31-43-74   Ready   Active
cw6jpk7pqfg0jkilff5hr8z42 *  ip-172-31-34-123  Ready   Active        Leader
```
The `ID` and `HOSTNAME` values may be different in your lab. The important
thing to check is that both nodes have joined the Swarm and are *ready* and
*active*.
Now that you have a Swarm initialized it's time to create an **overlay** network.
1. Create a new overlay network called "overnet" by executing the following command
on **node1**.
```
node1$ docker network create -d overlay overnet
0cihm9yiolp0s9kcczchqorhb
```
2. Use the `docker network ls` command to verify the network was created
successfully.
```
node1$ docker network ls
NETWORK ID NAME DRIVER SCOPE
1befe23acd58 bridge bridge local
0ea6066635df docker_gwbridge bridge local
726ead8f4e6b host host local
8eqnahrmp9lv ingress overlay swarm
ef4896538cc7 none null local
0cihm9yiolp0 overnet overlay swarm
```
The new "overnet" network is shown on the last line of the output above. Notice
how it is associated with the **overlay** driver and is scoped to the entire Swarm.
> **NOTE:** The other new networks (ingress and docker_gwbridge) were created
automatically when the Swarm cluster was created.
Notice that the "overnet" network does not appear in the list. This is because
Docker only extends overlay networks to hosts when they are needed. This is usually
when a host runsa task from a service that is created on the network. We will see
this shortly.
4. Use the `docker network inspect` command to view more detailed information about
the "overnet" network. You will need to run this command from **node1**.
```
node1$ docker network inspect overnet
[
{
"Name": "overnet",
"Id": "0cihm9yiolp0s9kcczchqorhb",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Internal": false,
"Containers": null,
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "257"
},
"Labels": null
}
]
```
Now that you have a Swarm initialized and an overlay network, it's time to create a
service that uses the network.
1. Execute the following command from **node1** to create a new service called
*myservice* on the *overnet* network with two tasks/replicas.
```
node1$ docker service create --name myservice \
--network overnet \
--replicas 2 \
ubuntu sleep infinity
e9xu03wsxhub3bij2tqyjey5t
```
2. Verify that the service is created and both replicas are up.
```
node1$ docker service ls
ID NAME REPLICAS IMAGE COMMAND
e9xu03wsxhub myservice 2/2 ubuntu sleep infinity
```
The `2/2` in the `REPLICAS` column shows that both tasks in the service are up
and running.
3. Verify that a single task (replica) is running on each of the two nodes in the
Swarm.
```
node1$ docker service ps myservice
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
5t4wh...fsvz myservice.1 ubuntu node1 Running Running 2 mins
8d9b4...te27 myservice.2 ubuntu node2 Running Running 2 mins
```
The `ID` and `NODE` values might be different in your output. The important
thing to note is that each task/replica is running on a different node.
4. Now that **node2** is running a task on the "overnet" network it will be able to
see the "overnet" network. Run the following command from **node2** to verify this.
```
node2$ docker network ls
NETWORK ID NAME DRIVER SCOPE
b76635120433 bridge bridge local
ea13f975a254 docker_gwbridge bridge local
73edc8c0cc70 host host local
8eqnahrmp9lv ingress overlay swarm
c4fb141606ca none null local
0cihm9yiolp0 overnet overlay swarm
```
5. Run the following command on **node2** to get more detailed information about
the "overnet" network and obtain the IP address of the task running on **node2**.
```
node2$ docker network inspect overnet
[
{
"Name": "overnet",
"Id": "0cihm9yiolp0s9kcczchqorhb",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Containers": {
"286d2e98c764...37f5870c868": {
"Name": "myservice.1.5t4wh7ngrzt9va3zlqxbmfsvz",
"EndpointID": "43590b5453a...4d641c0c913841d657",
"MacAddress": "02:42:0a:00:00:04",
"IPv4Address": "10.0.0.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "257"
},
"Labels": {}
}
]
```
You should note that as of Docker 1.12, `docker network inspect` only shows containers/tasks running on the local node. This means that `10.0.0.4` is the IPv4 address of the container running on **node2**. Make a note of this IP address for the next step (the IP address in your lab might be different than the one shown here in the lab guide).
To complete this step you will need the IP address of the service task running on **node2** that you saw in the previous step.
1. Run the following `docker network inspect` command on **node1** to see the IP address of the service task running locally on **node1**.
```
node1$ docker network inspect overnet
[
{
"Name": "overnet",
"Id": "0cihm9yiolp0s9kcczchqorhb",
"Scope": "swarm",
"Driver": "overlay",
"Containers": {
"053abaa...e874f82d346c23a7a": {
"Name": "myservice.2.8d9b4i6vnm4hf6gdhxt40te27",
"EndpointID": "25d4d5...faf6abd60dba7ff9b5fff6",
"MacAddress": "02:42:0a:00:00:03",
"IPv4Address": "10.0.0.3/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "257"
},
"Labels": {}
}
]
```
Notice that the IP address listed for the service task (container) running on **node1** is different from the IP address of the service task running on **node2**. Note also that they are on the same "overnet" network.
2. Run a `docker ps` command to get the ID of the service task on **node1** so that
you can log in to it in the next step.
```
node1$ docker ps
CONTAINER ID   IMAGE           COMMAND            CREATED       STATUS       NAMES
053abaac4f93   ubuntu:latest   "sleep infinity"   19 mins ago   Up 19 mins   myservice.2.8d9b4i6vnm4hf6gdhxt40te27
<Snip>
```
3. Log on to the service task. Be sure to use the container `ID` from your
environment as it will be different from the example shown below.
```
node1$ docker exec -it 053abaac4f93 /bin/bash
root@053abaac4f93:/#
```
4. Install the ping command and ping the service task running on **node2**.
```
root@053abaac4f93:/# apt-get update && apt-get install iputils-ping
<Snip>
root@053abaac4f93:/#
root@053abaac4f93:/#
root@053abaac4f93:/# ping 10.0.0.4
PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.726 ms
64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=0.647 ms
^C
--- 10.0.0.4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.647/0.686/0.726/0.047 ms
```
The output above shows that both tasks from the **myservice** service are on
the same overlay network spanning both nodes and that they can use this network to
communicate.
Now that you have a working service using an overlay network, let's test service
discovery.
If you are not still inside of the container on **node1**, log back into it with
the `docker exec` command.
```
root@053abaac4f93:/# cat /etc/resolv.conf
search eu-west-1.compute.internal
nameserver 127.0.0.11
options ndots:0
```
The value that we are interested in is the `nameserver 127.0.0.11`. This value sends all DNS queries from the container to an embedded DNS resolver running inside the container listening on 127.0.0.11:53. All Docker containers run an embedded DNS server at this address.
> **NOTE:** Some of the other values in your file may be different to those
shown in this guide.
2. Try and ping the `myservice` name from within the container.
```
root@053abaac4f93:/# ping myservice
PING myservice (10.0.0.2) 56(84) bytes of data.
64 bytes from ip-10-0-0-2.eu-west-1.compute.internal (10.0.0.2): icmp_seq=1
ttl=64 time=0.020 ms
64 bytes from ip-10-0-0-2.eu-west-1.compute.internal (10.0.0.2): icmp_seq=2
ttl=64 time=0.041 ms
64 bytes from ip-10-0-0-2.eu-west-1.compute.internal (10.0.0.2): icmp_seq=3
ttl=64 time=0.039 ms
^C
--- myservice ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.020/0.033/0.041/0.010 ms
```
The output clearly shows that the container can ping the `myservice` service by
name. Notice that the IP address returned is `10.0.0.2`. In the next few steps
we'll verify that this address is the virtual IP (VIP) assigned to the `myservice`
service.
3. Type the `exit` command to leave the `exec` container session and return to the
shell prompt of your **node1** Docker host.
4. Inspect the configuration of the `myservice` service and verify that the VIP
value matches the value returned by the previous `ping myservice` command.
```
node1$ docker service inspect myservice
[
{
"ID": "e9xu03wsxhub3bij2tqyjey5t",
"Version": {
"Index": 20
},
"CreatedAt": "2016-11-23T09:28:57.888561605Z",
"UpdatedAt": "2016-11-23T09:28:57.890326642Z",
"Spec": {
"Name": "myservice",
"TaskTemplate": {
"ContainerSpec": {
"Image": "ubuntu",
"Args": [
"sleep",
"infinity"
]
},
<Snip>
"Endpoint": {
"Spec": {
"Mode": "vip"
},
"VirtualIPs": [
{
"NetworkID": "0cihm9yiolp0s9kcczchqorhb",
"Addr": "10.0.0.2/24"
}
<Snip>
```
Towards the bottom of the output you will see the VIP of the service listed.
The VIP in the output above is `10.0.0.2` but the value may be different in your
setup. The important point to note is that the VIP listed here matches the value
returned by the `ping myservice` command.
Feel free to create a new `docker exec` session to the service task (container) running on **node2** and perform the same `ping myservice` command. You will get a response from the same VIP.
# HTTP Routing Mesh (HRM)
# Lab Meta
In this lab you'll learn how to configure and use the *HTTP Routing Mesh* with
*Docker Datacenter*.
# Prerequisites
1. Use a web browser to connect to the Login page of your UCP cluster
3. Navigate to `Admin Settings` > `Routing Mesh` and enable the HTTP Routing Mesh
(HRM) on port 80.

Enabling the HRM creates a new *service* called `ucp-hrm` and a new network called
`ucp-hrm`. In this step we'll confirm that both of these constructs have been
created correctly.
1. Navigate to `Resources` > `Networks` and check for the presence of the `ucp-hrm`
network. You may have to `search` for it.

The network shows as an overlay network scoped to the entire Swarm cluster.
2. Navigate to `Resources` > `Services` and click the checkbox to `Show system
services`.

You have now verified that the HRM was configured successfully.
In the next two steps you'll create two services. Each service will be based on the same `ehazlett/docker-demo:latest` image and runs a web server that counts containers and requests. You will configure each service with a different number of tasks and each with a different value in the `TITLE` variable.
In this step you'll create a new service called **RED**, and configure it to use
the HRM.
2. Configure the service as follows (leave all other options as default and
remember to substitute "red.example.com" with the DNS name from your environment):
- Name: `RED`
- Image: `ehazlett/docker-demo:latest`
- Scale: `10`
- Arguments: `-close-conn`
- Published port: Port = `8080/tcp`, Public Port = `5000`
- Attached Networks: `ucp-hrm`
- Labels: `com.docker.ucp.mesh.http` = `8080=http://red.example.com`
- Environment Variables: `TITLE` = `RED`
It will take a few minutes for this service to pull down the image and start.
Continue with the next step to create the **WHITE** service.
In this step you'll create a new service called **WHITE**. The service will be very
similar to the **RED** service created in the previous step.
2. Configure the service as follows (leave all other options as default and remember to substitute "white.example.com" with the DNS name from your environment):
- Name: `WHITE`
- Image: `ehazlett/docker-demo:latest`
- Scale: `5`
- Arguments: `-close-conn`
- Published port: Port = `8080/tcp`, Public Port = `5001`
- Attached Networks: `ucp-hrm`
- Labels: `com.docker.ucp.mesh.http` = `8080=http://white.example.com`
- Environment Variables: `TITLE` = `WHITE`
This service will start instantaneously as the image is already pulled on every
host in your UCP cluster.
3. Verify that both services are up and running by clicking `Resources` >
`Services` and checking that both services are running as shown below.

You now have two services running. Both are connected to the `ucp-hrm` network and both have the `com.docker.ucp.mesh.http` label. The **RED** service is associated with HTTP requests for `red.example.com` and the **WHITE** service is associated with HTTP requests for `white.example.com`. This mapping of labels to URLs is leveraged by the `ucp-hrm` service, which is published on port 80.
> **NOTE: DNS name resolution is required for this step. This can obviously be via
the local hosts file, but this step will not work unless the URLs specified in the
`com.docker.ucp.mesh.http` labels resolve to the UCP cluster nodes (probably via a
load balancer).**
In this step you will use your web browser to issue HTTP requests to
`red.example.com` and `white.example.com`. DNS name resolution is configured so
that these URLs resolve to a load balancer which in turn balances requests across
all nodes in the UCP cluster.
> Remember to substitute `example.com` with the domain supplied by your lab
instructor.

The text below the whale saying "RED" indicates that this request was answered by
the **RED** service. This is because the `TITLE` environment variable for the
**RED** service was configured to display "RED" here. You also know it is the
**RED** service as this was the service configured with 10 replicas (containers).

The output above shows that this request was routed to the **WHITE** service as
it displays "WHITE" below the whale and only has 5 replicas (containers).
Congratulations. You configured two services in the same Swarm (UCP cluster) to
respond to requests on port 80. Traffic to each service is routed based on the URL
included in the `host` field of the HTTP header.
Requests arrive to the Swarm on port 80 and are forwarded to the `ucp-hrm` system
service. The `ucp-hrm` service inspects the HTTP headers of requests and routes
them to the service with the matching `com.docker.ucp.mesh.http` label.
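A hedged way to exercise this routing without relying on external DNS (substitute your lab domain and point curl at any UCP node or the load balancer):

```bash
# The HRM routes on the Host header, so setting it explicitly selects the service
$ curl -H "Host: red.example.com" http://<ucp-node-or-lb>/
$ curl -H "Host: white.example.com" http://<ucp-node-or-lb>/
```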
The mechanisms that provide service discovery and load balancing can take many
forms. They can be external services or provided natively by Docker without extra
infrastructure. In Docker Swarm Mode, automatic service discovery and load balancing
are provided right out of the box. A service can be defined, and traffic to it is
load balanced via DNS to its containers. DNS load balancing is covered later in this
guide.
External solutions for service discovery and/or load balancing are also possible and
may be desired to leverage existing infrastructure or to take advantage of special
features. Common external service discovery mechanisms include Consul, etcd, and
ZooKeeper. Common external load balancers include HAProxy, NGINX, F5, and many more.
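As a quick, hedged illustration of the built-in behavior (the `demo-net` and `demo-web` names and the `nginx` image are only examples, and the `--attachable` flag assumes Docker 1.13 or later), a replicated service created with DNS round robin can be resolved by name from any container on the same network:
```bash
#Create an attachable overlay network and a replicated service that uses DNS round robin
$ docker network create -d overlay --attachable demo-net
$ docker service create --name demo-web --network demo-net --replicas 3 --endpoint-mode dnsrr nginx

#From any container on the same network, the service name resolves to one A record per task
$ docker run -it --rm --network demo-net alpine nslookup demo-web
```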
In this context we call the physical network (comprised of the host network
adapters and upstream switches & routers) the underlay network. Between port
mapping and overlay tunneling, containers only receive private IP addresses and are
not part of the underlay network. This has many advantages:
* Portability is increased because applications are not tied to the design of the
physical network
* Agility is improved because new networks can be created and destroyed without
having to reconfigure physical infrastructure
There are scenarios where it may be more desirable to place containers directly on
the underlay network so that they receive an IP address that is on the underlay
subnet. These scenarios include:
<p align="center">
<img src="./img/app-policy.png" width=70%>
</p>
```
#Create a user-defined bridge network with a custom subnet and gateway
$ docker network create -d bridge --subnet 10.0.0.0/24 --gateway 10.0.0.1 br0
```
<p align="center">
<img src="./img/bridgenat.png" width=100%>
</p>
```
host-A $ docker network create -d bridge --subnet 192.168.1.0/24 --ip-range 192.168.1.0/28 --gateway 192.168.1.100 --aux-address DefaultGatewayIPv4=192.168.1.1 -o com.docker.network.bridge.name=brnet brnet

host-B $ docker network create -d bridge --subnet 192.168.1.0/24 --ip-range 192.168.1.32/28 --gateway 192.168.1.101 --aux-address DefaultGatewayIPv4=192.168.1.1 -o com.docker.network.bridge.name=brnet brnet
```
<p align="center">
<img src="./img/bridgetounderlay.png" width=100%>
</p>
<p align="center">
<img src="./img/overlaydef.png" width=100%>
</p>
##### MACVLAN
#### My Questions
- Is it possible to connect a container to two networks in Swarm Mode?
- How does DNS work when a container is connected to two different networks?
- Is it possible to connect both a bridge network and an overlay network to a container at the same time?
#### Management Network
- What Docker engine/swarm/UCP/DTR traffic constitutes management traffic?
- What are the network policies for Docker management traffic?
Notes
- No more than one bridge when plumbing directly to the underlay
- Discuss docker events -> service discovery
- Mention port mapping with a specific IP; not insecure because the port is only exposed on that IP
Networking Challenges
- `DB` specifies the hostname:port or IP:port of the `db` container for the web
front end to use.
- `ROLE` specifies the "tenant" of the application and whether it serves pictures
of dogs or cats.
It consists of `web`, a Python Flask container, and `db`, a Redis container. Its
architecture and required network policy are described below.
We will run this application on different network deployment models to show how we
can instantiate connectivity and network policy. Each deployment model exhibits
different characteristics that may be advantageous to your application and
environment.
- Bridge Driver
- Overlay Driver
- MACVLAN Bridge Mode Driver
```bash
#Create a user-defined bridge network for our application
$ docker network create -d bridge catnet

#Instantiate the redis backend on the catnet network with the name `cat-db`
$ docker run -d --net catnet --name cat-db redis

#Instantiate the web frontend on the catnet network and configure it to connect to a container named `cat-db`
$ docker run -d --net catnet -p 8000:5000 -e 'DB=cat-db' -e 'ROLE=cat' chrch/web
```
> When an IP address is not specified, the port mapping is exposed on all interfaces
of the host. In this case the container's application is exposed on `0.0.0.0:8000`.
We can specify a particular IP address to expose the application on only a single
interface with the flag `-p IP:host_port:container_port`. More options for exposing
ports can be found in the [Docker
docs](https://docs.docker.com/engine/reference/run/#/expose-incoming-ports).
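For example, the following variation would likely publish the same frontend only on one host interface (the address `192.168.1.10` is just a placeholder for an IP configured on the host):
```bash
#Publish the web frontend only on 192.168.1.10 instead of all host interfaces
$ docker run -d --net catnet -p 192.168.1.10:8000:5000 -e 'DB=cat-db' -e 'ROLE=cat' chrch/web
```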
The `web` container takes some environment variables to determine which backend it
needs to connect to. Above we supply it with `cat-db` which is the name of our
`redis` service. The Docker Engine's built-in DNS will resolve a container's name
to its location in any user-defined network. Thus, on a network, a container or
service can always be referenced by its name.
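You can likely confirm this name resolution from any other container attached to the same network; a minimal sketch using the stock `alpine` image:
```bash
#The name cat-db resolves via the embedded DNS server because both containers share the catnet network
$ docker run -it --rm --net catnet alpine ping -c 2 cat-db
```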
With the above commands we have deployed our application on a single host. The
Docker bridge network provides connectivity and name resolution amongst the
containers on the same bridge while exposing our frontend container externally.
```
$ docker network inspect catnet
[
{
"Name": "catnet",
"Id": "81e45d3e3bf4f989abe87c42c8db63273f9bf30c1f5a593bae4667d5f0e33145",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1/16"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {
"2a23faa18fb33b5d07eb4e0affb5da36449a78eeb196c944a5af3aaffe1ada37": {
"Name": "backstabbing_pike",
"EndpointID":
"9039dae3218c47739ae327a30c9d9b219159deb1c0a6274c6d994d37baf2f7e3",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
},
"dbf7f59187801e1bcd2b0a7d4731ca5f0a95236dbc4b4157af01697f295d4528": {
"Name": "cat-db",
"EndpointID":
"7f7c51a0468acd849fd575adeadbcb5310c5987195555620d60ee3ffad37c680",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
```
In this output, we can see that our two containers have automatically been given IP
addresses from the `172.19.0.0/16` subnet. This is the subnet of the local `catnet`
bridge, and it will assign an address from this range to every connected container
unless one is statically configured.
```
host-A $ docker run -d -p 8000:5000 -e 'DB=host-B:8001' -e 'ROLE=cat' --name cat-web chrch/web
host-B $ docker run -d -p 8001:6379 redis
```
> In this example we don't specify a network to use, so the containers are attached
to the default Docker `bridge` network, which exists on every host.
In the overlay driver example we will see that multi-host service discovery is
provided out of the box, which is a major advantage of the overlay deployment
model.
In this example we are re-using the previous Pets application. Prior to this
example we already set up a Docker Swarm. For instructions on how to set up a Swarm
read the [Docker docs](https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/).
When the Swarm is set up, we can use the `docker service create` command
to create containers and networks that will be managed by the Swarm.
The following shows how to inspect your Swarm, create an overlay network, and then
provision some services on that overlay network. All of these commands are run on a
UCP/swarm controller node.
```bash
#Display the nodes participating in this swarm cluster
$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
a8dwuh6gy5898z3yeuvxaetjo    host-B    Ready   Active
elgt0bfuikjrntv3c33hr0752 *  host-A    Ready   Active        Leader
```
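The text below refers to a `dognet` overlay network with `dog-web` and `dog-db` services already running on it. A hedged sketch of how they could be created, mirroring the `catnet` commands shown further down (the `10.1.0.0/24` subnet is only an assumption):
```bash
#Create an overlay network for the dog tenant
$ docker network create -d overlay --subnet 10.1.0.0/24 --gateway 10.1.0.1 dognet

#Create the redis backend and the web frontend as services on the dognet network
$ docker service create --network dognet --name dog-db redis
$ docker service create --network dognet -p 8000:5000 -e 'DB=dog-db' -e 'ROLE=dog' --name dog-web chrch/web
```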
> Inside overlay and bridge networks, all TCP and UDP ports to containers are open
and accessible to all other containers attached to the same network.
The `dog-web` service is exposed on port `8000`, but in this case the __routing
mesh__ will expose port `8000` on every host in the Swarm. We can test to see if
the application is working by going to `<host-A>:8000` or `<host-B>:8000` in the
browser.
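The same check can be done from the command line (a sketch; substitute the real addresses of your own nodes):
```bash
#The routing mesh answers on port 8000 on every Swarm node, even nodes not running a dog-web task
$ curl -s http://host-A:8000 | head -n 5
$ curl -s http://host-B:8000 | head -n 5
```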
Complex network policies can easily be achieved with overlay networks. In the
following configuration, we add the `cat` tenant to our existing application. This
represents two applications using the same cluster but requiring network micro-
segmentation. We add a second overlay network with a second pair of `web` and
`redis` containers. We also add an `admin` container that needs to have access to
_both_ tenants.
To accomplish this policy we create a second overlay network, `catnet`, and attach
the new containers to it. We also create the `admin` service and attach it to both
networks.
```
$ docker network create -d overlay --subnet 10.2.0.0/24 --gateway 10.2.0.1 catnet
$ docker service create --network catnet --name cat-db redis
$ docker service create --network catnet -p 9000:5000 -e 'DB=cat-db' -e 'ROLE=cat' --name cat-web chrch/web
$ docker service create --network dognet --network catnet -p 7000:5000 -e 'DB1=dog-db' -e 'DB2=cat-db' --name admin chrch/admin
```
- `dog-web` and `dog-db` can communicate with each other, but not with the `cat`
service.
- `cat-web` and `cat-db` can communicate with each other, but not with the `dog`
service.
- `admin` is connected to both networks and has reachability to all containers.
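One way to likely verify this segmentation is to test name resolution from inside the running tasks (a hedged sketch; it assumes the `chrch/web` and `chrch/admin` images have `python` on their PATH, uses `docker ps` name filters to find a local task, and must be run on a node where that task is scheduled):
```bash
#From a cat-web task, dog-db should NOT resolve because cat-web is not attached to dognet
$ docker exec -it $(docker ps -qf name=cat-web | head -n 1) \
    python -c "import socket; print(socket.gethostbyname('dog-db'))"

#From the admin task, both backends should resolve because admin is attached to both networks
$ docker exec -it $(docker ps -qf name=admin | head -n 1) \
    python -c "import socket; print(socket.gethostbyname('cat-db'), socket.gethostbyname('dog-db'))"
```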
There may be cases where the application or network environment requires containers
to have routable IP addresses that are a part of the underlay subnets. The MACVLAN
driver provides an implementation that makes this possible. As described in the
[MACVLAN Architecture section](#macvlan), a MACVLAN network binds itself to a host
interface. This can be a physical interface, a logical sub-interface, or a bonded
logical interface. It acts as a virtual switch and provides communication between
containers on the same MACVLAN network. Each container receives a unique MAC
address and an IP address of the physical network that the node is attached to.
```bash
#Creation of local macvlan network on both hosts
host-A $ docker network create -d macvlan --subnet 192.168.0.0/24 --gateway 192.168.0.1 -o parent=eth0 macvlan
host-B $ docker network create -d macvlan --subnet 192.168.0.0/24 --gateway 192.168.0.1 -o parent=eth0 macvlan
```
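The `dog-web` and `dog-db` containers discussed below would then presumably be started on this network. A sketch with statically assigned addresses (the specific IPs are assumptions and would normally be allocated by an external IPAM process; note that a local `macvlan` network does not provide cross-host name resolution, so the frontend is pointed at the backend's address directly):
```bash
#host-B runs the redis backend with a fixed underlay address
host-B $ docker run -d --net macvlan --ip 192.168.0.12 --name dog-db redis

#host-A runs the web frontend with its own fixed underlay address, pointing at dog-db's IP and redis port
host-A $ docker run -d --net macvlan --ip 192.168.0.11 -e 'DB=192.168.0.12:6379' -e 'ROLE=dog' --name dog-web chrch/web
```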
When `dog-web` communicates with `dog-db`, the physical network routes or switches
the packet using the source and destination addresses of the containers. This can
simplify network visibility, as the packet headers can be linked directly to
specific containers. At the same time, application portability is decreased because
container IPAM is tied to the physical network. Container addressing must match the
physical location where containers are placed, and overlapping address assignments
must be prevented. Because of this, care must be taken to manage IPAM externally to
a MACVLAN network. Overlapping IP addressing or incorrect subnets can lead to loss
of container connectivity.
## Conclusion
Docker is a quickly evolving technology, and the networking options are growing to
satisfy more and more use cases every day. Incumbent networking vendors, pure-play
SDN vendors, and Docker itself are all contributors to this space. Tighter
integration with the physical network, network monitoring, and encryption are all
areas of much interest and innovation.
This document detailed some but not all of the possible deployments and CNM network
drivers that exist. While there are many individual drivers and even more ways to
configure those drivers, we hope you can see that there are only a few common
models routinely deployed. Understanding the tradeoffs with each model is key to
long term success.