There is a great tool called Pipework (https://github.com/jpetazzo/pipework) that ties together docker, iproute2, and Linux bridge commands. It is written in Bash and automates the process of creating network bridges and virtual Ethernet interfaces (veth pairs) on the host, as well as adding extra network interfaces inside containers (LXC or Docker).
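If pipework is not already installed, one quick way to get it is to download the script from the repository and make it executable. This is just a sketch; the raw-file URL and install path below are assumptions based on the repo layout, so adjust as needed:
[archjun@pinkS310 ~]$ sudo curl -L -o /usr/local/bin/pipework https://raw.githubusercontent.com/jpetazzo/pipework/master/pipework
[archjun@pinkS310 ~]$ sudo chmod +x /usr/local/bin/pipework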
First, let's see which docker images I have available locally for launching containers:
[archjun@pinkS310 ~]$ docker images
REPOSITORY         TAG       IMAGE ID       CREATED       VIRTUAL SIZE
jess/chromium      latest    c0aed183c970   4 days ago    567.5 MB
jess/gparted       latest    6d1bee229713   7 days ago    212.4 MB
l3iggs/archlinux   latest    0ac34c50f830   10 days ago   365.8 MB
busybox            latest    c51f86c28340   5 weeks ago   1.109 MB
These were all downloaded from Docker Hub using docker pull <repository name>. I will launch two container instances from the busybox image:
[archjun@pinkS310 ~]$ docker run -ti --rm busybox /bin/sh
An explanation of the option flags (from man docker run):
-i or --interactive
Keep STDIN open even if not attached (i.e. connected to the container).
-t or --tty
Allocate a pseudo-TTY (pty).
--rm
Automatically remove the container when it exits (the same as running docker rm on it afterwards).
/bin/sh
Finally, you must give a command to run inside the container. The busybox image does not contain bash, but it does have sh.
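As an aside, you can also give containers memorable names of your own with --name (box1 below is just an example name I made up), which makes commands that take a container name, like the pipework invocations later in this post, easier to type than the auto-generated names:
[archjun@pinkS310 ~]$ docker run -ti --rm --name box1 busybox /bin/sh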
Let's take a look at the network ifaces inside the busybox container:
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:2/64 scope link
valid_lft forever preferred_lft forever
Only two ifaces exist: loopback and eth0@if8, which is connected on the host side to the bridge iface docker0. Note that the IP is in the 172.17.0.0/16 range, the default subnet Docker assigns to docker0. Through the docker0 bridge, containers can communicate with each other (and with the host), but my LAN uses the 192.168.10.0/24 subnet, so the containers cannot yet be reached at addresses on my local network.
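If you want to confirm that the container's traffic really goes out through docker0, you can check its routing table from inside the container (assuming the busybox build includes the ip route applet; the default gateway should be docker0's host-side address, 172.17.0.1 by default):
/ # ip route   # expect something like: default via 172.17.0.1 dev eth0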
I will launch one more busybox container:
[archjun@pinkS310 ~]$ docker run -ti --rm busybox:latest /bin/sh
The second container also has only two network ifaces, one of which is mapped to bridge iface docker0.
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:3/64 scope link
valid_lft forever preferred_lft forever
From a terminal on my host, let's take a look at the running containers:
[archjun@pinkS310 ~]$ docker ps
CONTAINER ID   IMAGE            COMMAND     CREATED       STATUS       PORTS   NAMES
859be599f53d   busybox:latest   "/bin/sh"   2 hours ago   Up 2 hours           drunk_perlman
5140cd8079d4   busybox          "/bin/sh"   2 hours ago   Up 2 hours           stoic_mcnulty
Two busybox containers are running, drunk_perlman and stoic_mcnulty.
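As a side note, you can also look up a container's bridge IP from the host without attaching to it, using docker inspect with a Go template (this relies on the classic .NetworkSettings.IPAddress field, which should exist on Docker versions of this vintage):
[archjun@pinkS310 ~]$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' drunk_perlman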
The following network ifaces are active on the host:
[archjun@pinkS310 ~]$ ip a show up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
link/ether f8:a9:63:3c:23:64 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.97/24 brd 192.168.10.255 scope global enp1s0
valid_lft forever preferred_lft forever
inet6 fe80::faa9:63ff:fe3c:2364/64 scope link
valid_lft forever preferred_lft forever
3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
link/ether b8:ee:65:d8:fd:f7 brd ff:ff:ff:ff:ff:ff
inet 192.168.40.195/24 brd 192.168.40.255 scope global wlp2s0
valid_lft forever preferred_lft forever
inet6 fe80::baee:65ff:fed8:fdf7/64 scope link
valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 52:54:00:59:95:09 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:3e:81:64:5d brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:3eff:fe81:645d/64 scope link
valid_lft forever preferred_lft forever
8: vetha4b3346@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
link/ether 96:ce:70:36:f9:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::94ce:70ff:fe36:f95e/64 scope link
valid_lft forever preferred_lft forever
10: veth04edbf0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
link/ether 92:01:7a:5b:0b:06 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::9001:7aff:fe5b:b06/64 scope link
valid_lft forever preferred_lft forever
docker0 is the bridge interface created by the docker daemon (started here as a systemd service). virbr0 is the default bridge created by libvirt for virtual machines run under KVM/QEMU. My Ethernet interface, enp1s0, has the IP 192.168.10.97/24.
Finally, the virtual Ethernet ifaces vetha4b3346 and veth04edbf0 are the host-side peers of eth0 inside each busybox container; they are attached as ports to docker0.
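If you want to list the ports attached to a particular bridge directly, a reasonably recent iproute2 can filter by master; this should list the two veth interfaces above:
[archjun@pinkS310 ~]$ ip link show master docker0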
Now, running the pipework script as root, I will create a new bridge interface called br1 and connect each of the containers to it through a new network iface created inside the container.
[archjun@pinkS310 ~]$ sudo pipework br1 drunk_perlman 192.168.10.101/24
[archjun@pinkS310 ~]$ sudo pipework br1 stoic_mcnulty 192.168.10.102/24
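The general form here is pipework <bridge> <container> <ip>/<prefix>. If I recall the pipework README correctly, you can also append a default gateway with @, which would look something like the following (192.168.10.1 is just a hypothetical gateway address for my LAN):
[archjun@pinkS310 ~]$ sudo pipework br1 drunk_perlman 192.168.10.101/24@192.168.10.1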
Now on the host there are three new ifaces:
[archjun@pinkS310 ~]$ ip a show up
...
11: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 6e:7c:fc:f6:04:f5 brd ff:ff:ff:ff:ff:ff
inet6 fe80::10c3:eff:fec7:8d89/64 scope link
valid_lft forever preferred_lft forever
13: veth1pl26713@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br1 state UP qlen 1000
link/ether 6e:7c:fc:f6:04:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::6c7c:fcff:fef6:4f5/64 scope link
valid_lft forever preferred_lft forever
15: veth1pl23625@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br1 state UP qlen 1000
link/ether aa:4f:6d:9c:b9:bc brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::a84f:6dff:fe9c:b9bc/64 scope link
valid_lft forever preferred_lft forever
You can see that the two new veth ifaces have br1 as their master.
Inside each of the containers you can see one new interface with one of the IP addresses specified above (in the 192.168.10.0/24 range):
/ # ip a show up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:2/64 scope link
valid_lft forever preferred_lft forever
14: eth1@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel qlen 1000
link/ether 66:5c:ae:2b:26:3a brd ff:ff:ff:ff:ff:ff
inet 192.168.10.102/24 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::645c:aeff:fe2b:263a/64 scope link
valid_lft forever preferred_lft forever
The new iface in container stoic_mcnulty is eth1@if15, which is connected to bridge br1 through its host-side peer veth1pl23625.
Inside container drunk_perlman, you can see a new iface eth1@if13:
/ # ip a show up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:3/64 scope link
valid_lft forever preferred_lft forever
12: eth1@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel qlen 1000
link/ether f2:08:19:49:64:4c brd ff:ff:ff:ff:ff:ff
inet 192.168.10.101/24 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::f008:19ff:fe49:644c/64 scope link
valid_lft forever preferred_lft forever
The new iface was created by pipework and has an IP on the same subnet as the host's wired network (192.168.10.0/24).
So far, so good. With just this setup, however, I am still unable to ping the containers at their new 192.168.10.x addresses from my host. Now I must make my Ethernet port enp1s0 a slave of bridge br1.
[archjun@pinkS310 ~]$ sudo ip l set enp1s0 master br1
[archjun@pinkS310 ~]$ bridge link
2: enp1s0 state UP : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 19
5: virbr0-nic state DOWN : <BROADCAST,MULTICAST> mtu 1500 master virbr0 state disabled priority 32 cost 100
8: vetha4b3346 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2
10: veth04edbf0 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2
13: veth1pl26713 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 2
15: veth1pl23625 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 2
bridge link (show) displays the current port config and flags for Linux bridges. Note that docker0 itself does not appear in this output because bridge link lists bridge ports (enslaved interfaces), not the bridge devices themselves; its ports vetha4b3346 and veth04edbf0 do appear with master docker0. You can also see that enp1s0 now has br1 as its master.
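If you prefer a per-bridge view that lists each bridge together with its ports (docker0 included), the older brctl tool from the bridge-utils package (if you have it installed) also works:
[archjun@pinkS310 ~]$ brctl show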
Since br1 is the master for Ethernet port enp1s0, I have to clear the IP address from enp1s0 and assign it to br1 instead:
[archjun@pinkS310 ~]$ sudo ip a flush enp1s0
[archjun@pinkS310 ~]$ sudo ip a add 192.168.10.97/24 dev br1
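One caveat: flushing the address on enp1s0 also drops any routes that went through it, including the default route, so you may need to re-add it on br1. Assuming a LAN gateway of 192.168.10.1 (a made-up address for illustration), that would look like:
[archjun@pinkS310 ~]$ sudo ip route add default via 192.168.10.1 dev br1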
Now pinging the containers at 192.168.10.101 and 192.168.10.102 from the host machine works. The host's wired IP, 192.168.10.97, now lives on br1, the master iface for enp1s0.
[archjun@pinkS310 ~]$ ping 192.168.10.101
PING 192.168.10.101 (192.168.10.101) 56(84) bytes of data.
64 bytes from 192.168.10.101: icmp_seq=1 ttl=64 time=0.093 ms
64 bytes from 192.168.10.101: icmp_seq=2 ttl=64 time=0.059 ms
64 bytes from 192.168.10.101: icmp_seq=3 ttl=64 time=0.051 ms
^C
--- 192.168.10.101 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.051/0.067/0.093/0.020 ms
[archjun@pinkS310 ~]$ ping 192.168.10.102
PING 192.168.10.102 (192.168.10.102) 56(84) bytes of data.
64 bytes from 192.168.10.102: icmp_seq=1 ttl=64 time=0.107 ms
64 bytes from 192.168.10.102: icmp_seq=2 ttl=64 time=0.047 ms
^C
--- 192.168.10.102 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.047/0.077/0.107/0.030 ms
Pinging other machines on the local network from the host also still works fine:
[archjun@pinkS310 ~]$ ping 192.168.10.58
PING 192.168.10.58 (192.168.10.58) 56(84) bytes of data.
64 bytes from 192.168.10.58: icmp_seq=1 ttl=64 time=0.817 ms
64 bytes from 192.168.10.58: icmp_seq=2 ttl=64 time=0.448 ms
64 bytes from 192.168.10.58: icmp_seq=3 ttl=64 time=0.483 ms
64 bytes from 192.168.10.58: icmp_seq=4 ttl=64 time=0.447 ms
^C
--- 192.168.10.58 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.447/0.548/0.817/0.158 ms
Now let's see whether the containers can ping the host as well as other machines on the LAN. (I will only show the terminal for one container, since the two look identical on the CLI; they don't have unique hostnames.)
/ # ping 192.168.10.97
PING 192.168.10.97 (192.168.10.97): 56 data bytes
64 bytes from 192.168.10.97: seq=0 ttl=64 time=0.104 ms
64 bytes from 192.168.10.97: seq=1 ttl=64 time=0.088 ms
64 bytes from 192.168.10.97: seq=2 ttl=64 time=0.107 ms
^C
--- 192.168.10.97 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.088/0.099/0.107 ms
/ # ping 192.168.10.58
PING 192.168.10.58 (192.168.10.58): 56 data bytes
64 bytes from 192.168.10.58: seq=0 ttl=64 time=0.607 ms
64 bytes from 192.168.10.58: seq=1 ttl=64 time=0.585 ms
64 bytes from 192.168.10.58: seq=2 ttl=64 time=0.543 ms
^C
--- 192.168.10.58 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.543/0.578/0.607 ms
Great! You can see the containers along with the other machines on my LAN in zenmap (the GUI for nmap) after a ping scan.
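For reference, the equivalent scan from the command line is a ping scan of the LAN subnet, something like:
[archjun@pinkS310 ~]$ nmap -sn 192.168.10.0/24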