Linux: Docker Network Communication


The images used in this document may differ from yours; pick whichever image you prefer.


Docker provides networking for containers by mapping container ports to the host and by linking containers to one another.

1. Docker Single-Host Networking

By scope, Docker networks fall into two categories: networks for containers on a single host, and networks that span multiple hosts.

Docker's native networks:

[root@docker ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
33fbe0ebf28f        bridge              bridge              local
e34526880b84        host                host                local
bf23eec1f232        none                null                local

Syntax for specifying a network:

--network none | host | <custom network>

1.1 The none network

The none network is exactly what it sounds like: nothing. A container attached to it has no interface other than lo. If you inspect such a container you will find only the loopback interface, with no MAC address and no IP, which means it cannot talk to the outside world at all; it is completely isolated. Pass --network=none when creating a container to attach it to the none network.

[root@docker ~]# docker run -itd --name none --network none busybox:latest
e1778e6f353b70f153046f8bc4635d63518ef47beacb087a1604cb8bf423d8ec
[root@docker ~]# docker exec -it none /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever

Use cases for the none network:

Being closed off means being isolated, so applications with high security requirements that do not need network access can use the none network. For example, a container whose only job is to generate random passwords can be placed on the none network so the passwords cannot be stolen over the network. Most containers, of course, do need networking; see the sketch below.
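As a minimal sketch of that password-generator idea (the one-liner below is illustrative, not taken from the original article), a throwaway busybox container on the none network can still derive a random string locally:

# run an isolated, one-shot container that derives a random string from /dev/urandom
[root@docker ~]# docker run --rm --network none busybox:latest \
    sh -c 'head -c 32 /dev/urandom | md5sum | cut -c1-16'

The container has no network path to the outside, so whatever it produces can only leave through stdout.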

1.2 The host network

With the host network, a container started in host mode does not get its own Network Namespace; it shares one with the host. The container does not create a virtual NIC or configure its own IP; it uses the host's IP and ports directly. Everything else, such as the filesystem and process list, remains isolated from the host.

[root@docker ~]# docker run -itd --name host --network host busybox:latest
cc1f2488f734e1350be60982e710aa46694f690b189dfd54041bd2b74d3be436
[root@docker ~]# docker exec -it host /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:1b:88:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.40/24 brd 192.168.1.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::84f5:b792:ed69:b569/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue qlen 1000
    link/ether 52:54:00:70:12:c0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 qlen 1000
    link/ether 52:54:00:70:12:c0 brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
    link/ether 02:42:3f:4b:f1:13 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

Use cases for the host network:

The biggest advantage of using the Docker host's network directly is performance; if a container has demanding network throughput requirements, the host network is a good choice. The trade-off is flexibility: you have to watch out for port conflicts, because ports already in use on the Docker host cannot be used again. A small demo follows.
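As a hedged demo of what "using the host's ports directly" means (the container name hostweb, port 8080, and the test content are made up for illustration; busybox ships a tiny httpd), a web server started in host mode is reachable on the host's own IP without any -p mapping:

[root@docker ~]# docker run -itd --name hostweb --network host busybox:latest httpd -f -p 8080 -h /tmp
[root@docker ~]# docker exec hostweb sh -c 'echo hello > /tmp/index.html'
[root@docker ~]# curl http://127.0.0.1:8080/        # served straight from the host's port 8080

If something on the host already listens on 8080, the httpd simply fails to bind, which is exactly the port-conflict caveat mentioned above.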

1.3 The bridge network

When the Docker daemon starts, it creates a virtual bridge named docker0 on the host, and every Docker container started on this host is attached to it. The virtual bridge works like a physical switch, so all containers on the host end up on one layer-2 network. Docker allocates each container an IP from the docker0 subnet and sets docker0's IP as the container's default gateway. It also creates a veth pair on the host, puts one end inside the new container as eth0 (the container's NIC), leaves the other end on the host with a name like vethxxx, and attaches that end to the docker0 bridge. You can see this with brctl show:

[root@docker ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br-0adeb3f0a070 8000.02429dc976d5       no
docker0         8000.02423f4bf113       no              veth077790e
virbr0          8000.5254007012c0       yes             virbr0-nic

Bridge mode is Docker's default network mode; if you do not pass --network, you get bridge mode. When you use docker run -p, Docker actually installs DNAT rules in iptables to implement the port forwarding. You can inspect them with iptables -t nat -vnL.

[root@docker ~]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 5 packets, 497 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 5 packets, 497 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 87 packets, 6774 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 87 packets, 6774 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE  all  --  *      !docker0  172.17.0.0/16       0.0.0.0/0
    2   269 RETURN     all  --  *      *       192.168.122.0/24     224.0.0.0/24
    0     0 RETURN     all  --  *      *       192.168.122.0/24     255.255.255.255
    0     0 MASQUERADE  tcp  --  *      *       192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  udp  --  *      *       192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  all  --  *      *       192.168.122.0/24    !192.168.122.0/24

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  docker0 *      0.0.0.0/0            0.0.0.0/0

docker0: when the Docker service is installed, a docker0 interface is created by default, normally with the IP 172.17.0.1/16. docker0 then acts like a router: containers attached to this network are all on the same subnet as docker0.

[root@docker ~]# docker run -itd --name test1 busybox:latest
60182b2680599ee0a6a673f6421c0ce2d21579db3149ef5612961bdfc20eafb7
[root@docker ~]# docker exec -it test1 /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
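To confirm the claim above that docker0 (172.17.0.1) is the bridged container's default gateway, you can print the routing table inside test1; a quick check, assuming the container from the step above is still running:

[root@docker ~]# docker exec test1 ip route
# expected: a "default via 172.17.0.1 dev eth0" route plus the 172.17.0.0/16 link route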

1.4 Custom networks (bridge)

1.4.1 Create a bridge network
[root@docker ~]# docker network create -d bridge my_net1
0adeb3f0a070d2225455e0c7c8b26d8fcbd6c3baa74b247e9a5ae82cb1ccd023
[root@docker ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
33fbe0ebf28f        bridge              bridge              local
e34526880b84        host                host                local
0adeb3f0a070        my_net1             bridge              local
bf23eec1f232        none                null                local
[root@docker ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br-0adeb3f0a070 8000.02429dc976d5       no
docker0         8000.02423f4bf113       no              veth077790e
virbr0          8000.5254007012c0       yes             virbr0-nic
1.4.2 Run containers and test communication

Note: an advantage of custom networks is that containers can reach each other by container name, for example with ping.

[root@docker ~]# docker run -itd --name test3 --network my_net1 busybox:latest
fb520b6dcc8c5c9dd5ed9f1aaa6688371c11729ea0b2ec73402853bad5722d61
[root@docker ~]# docker run -itd --name test4 --network my_net1 busybox:latest
b062d61731633a891fa793f318df39a34da71f2a7851852eacb68f831e7b00ca
[root@docker ~]# docker exec -it test3 /bin/sh
/ # ping test4
PING test4 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.073 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.066 ms
64 bytes from 172.18.0.3: seq=2 ttl=64 time=0.073 ms
^C
--- test4 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.066/0.070/0.073 ms
1.4.3 Specifying a subnet and gateway

Define your own network, specifying the gateway and subnet:

[root@docker ~]# docker network create -d bridge --subnet 192.168.2.0/24 --gateway 192.168.2.1 my_net2
90dc980ea9f6e34aa781d37f887b70810f729190c7b4469359cb658b5fa40245
[root@docker ~]# docker run -itd --name testA --network my_net2 --ip 192.168.2.2 busybox:latest
dafa42495452649013ae8f879a83928eb61f92c3a58807e710bd06519770133d
[root@docker ~]# docker run -itd --name testB --network my_net2 --ip 192.168.2.3 busybox:latest
1d4dc0c1ef87adc8728a91c01c1b7a97ce05a7d9bc2a76e379dd4dadaf5e86e5
[root@docker ~]# docker exec testA -it /bin/sh
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"-it\": executable file not found in $PATH": unknown
[root@docker ~]# docker exec -it testA /bin/sh
/ # ping testB
PING testB (192.168.2.3): 56 data bytes
64 bytes from 192.168.2.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 192.168.2.3: seq=1 ttl=64 time=0.064 ms
64 bytes from 192.168.2.3: seq=2 ttl=64 time=0.072 ms
^C
--- testB ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.064/0.068/0.072 ms

Extension

The containers created so far:

test1 and test2: bridged to docker0, subnet 172.17.0.0/16
test3 and test4: on the my_net1 network, subnet 172.18.0.0/16
testA and testB: on the my_net2 network, subnet 192.168.2.0/24

[root@docker ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
1d4dc0c1ef87        busybox:latest      "sh"                6 minutes ago       Up 6 minutes                            testB
dafa42495452        busybox:latest      "sh"                6 minutes ago       Up 6 minutes                            testA
b062d6173163        busybox:latest      "sh"                11 minutes ago      Up 11 minutes                           test4
fb520b6dcc8c        busybox:latest      "sh"                11 minutes ago      Up 11 minutes                           test3
f834b2841d5b        busybox:latest      "sh"                2 seconds ago       Up 1 second                             test2
60182b268059        busybox:latest      "sh"                19 minutes ago      Up 19 minutes                           test1

Command to view a container's IP:

[root@docker ~]# docker inspect test3 | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
                    "IPAddress": "172.18.0.2",
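If you only want the address itself, docker inspect also accepts a Go template via -f; a one-liner that iterates over whatever networks the container is attached to:

[root@docker ~]# docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' test3
172.18.0.2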

Command to view a network's subnet:

[root@docker ~]# docker network inspect my_net1
[
    {
        "Name": "my_net1",
        "Id": "0adeb3f0a070d2225455e0c7c8b26d8fcbd6c3baa74b247e9a5ae82cb1ccd023",
        "Created": "2020-09-04T11:39:59.297034202+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "b062d61731633a891fa793f318df39a34da71f2a7851852eacb68f831e7b00ca": {
                "Name": "test4",
                "EndpointID": "4afcb187b65e1405343f2a17f2ea196ee329e6a52a41e657923c78f707c40517",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "f834b2841d5bdb96332049455468f24584ee11aeb261944aaa0947f707ae4b3f": {
                "Name": "test2",
                "EndpointID": "20bd2e092dfeadd0d730a6c9cb32b362f6f1bb754d1ac936cbceb90ba59a5610",
                "MacAddress": "02:42:ac:12:00:04",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            },
            "fb520b6dcc8c5c9dd5ed9f1aaa6688371c11729ea0b2ec73402853bad5722d61": {
                "Name": "test3",
                "EndpointID": "1afad173503592755035d8c5a8d7474d9cc6fa79ac9db64558d8e7f88469b04a",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Problem:

test2 can ping the my_net1 gateway but cannot ping test3 or test4; likewise, test4 can ping the my_net2 gateway but cannot reach testA or testB. This is caused by the iptables rules that Docker installs. We do not want to modify the firewall rules to work around it, because that touches security and must be handled carefully; instead, we attach an additional network interface to the container.

Note: add a new interface to the test2 container and give it an address on the my_net1 subnet.

[root@docker ~]# docker network connect my_net1 test2
[root@docker ~]# docker exec -it test2 /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
24: eth1@if25: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:12:00:04 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.4/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever
/ # ping test3
PING test3 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.072 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.075 ms
^C
--- test3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.073/0.075 ms

Note: when containers on different networks need to reach each other, this approach is the safer option.
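The operation is reversible: when the extra interface is no longer needed, the container can be detached from the network again (a quick sketch, assuming the setup above is still in place):

# removes the eth1 interface that "docker network connect" added
[root@docker ~]# docker network disconnect my_net1 test2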

1.5 Joining another container's network (container mode)

Container mode shares the network protocol stack: one container shares it with another.

In this mode, a newly created container shares a Network Namespace with an existing container rather than with the host. The new container does not create its own NIC or configure its own IP; it shares the IP and port range of the specified container. Apart from networking, everything else, such as the filesystem and process list, remains isolated between the two containers. Processes in the two containers can communicate over the lo interface. Syntax:

--network container:<name of the container to share>

[root@docker ~]# docker run -itd --name web1 busybox:latest
d952d665a823c1e055d0abae532e3c2ea6ef19348016063de1ef5fcd779fdbb8
[root@docker ~]# docker run -itd --name web2 --network container:web1 busybox:latest
77ca29f22519e3516c40a4752927a3cb379cf884dc89c25864e258a043ffee1f
[root@docker ~]# docker exec -it web1 /bin/sh
/ # echo 123123123 > /tmp/index.html
/ # httpd -h /tmp/
/ # exit
[root@docker ~]# docker exec -it web2 /bin/sh
/ # wget -O - -q 127.0.0.1      # the O is an uppercase O
123123123

Note: in wget -O - -q 127.0.0.1, the O is an uppercase O.

Use cases for container mode:

Because of how this mode works, it is usually chosen when a second container needs to run alongside the same service for monitoring, log collection, or network monitoring; a sketch follows.
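As a hedged sketch of that monitoring use case (reusing the web1 httpd set up above; netstat with -tln is available in busybox), a throwaway sidecar attached in container mode can inspect the service's listening sockets without exposing any ports:

[root@docker ~]# docker run --rm --network container:web1 busybox:latest netstat -tln
# the httpd started inside web1 should appear here as a LISTEN socket on port 80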

2. Port Mapping

After a container starts, the outside world cannot reach the services inside it unless the corresponding ports are known and exposed. Docker provides a port-mapping mechanism to make services inside a container reachable from external networks: a port on the host is mapped to a port in the container, so traffic arriving at the host port reaches the container's service.

Parameters for port mapping:

-P (uppercase): map to a randomly chosen host port, traditionally said to be in the 49000-49900 range, although this is not guaranteed and it can fall outside that range (the example below lands on 32768)
-p (lowercase): specify the port mapping explicitly

First of all, the httpd:1.10 image used here must be one that already provides a web service such as nginx or Apache.

2.1 Random port

[root@docker ~]# docker run -itd --name nginx1 -P httpd:1.10
277f2eba703e6174b403ea3218bfca7a10b8ac074e7d8b27accfa4c4db11033d

2.2 Specified port

Manually specify the port mapping:

[root@docker ~]# docker run -d --name nginx2 -p 90:80 httpd:1.10
ce9c18b17d4fc026186526d0d111fc7a8ffab99c6143b3354e14c6efc520d765

Map a random host port to the container's port 80:

[root@docker ~]# docker run -d --name nginx3 -p 80 httpd:1.10
27495ba4bd10161410414947191a33459924ddbee7adc8d9ff036fca29e551ed
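-p also accepts an optional bind address in the form ip:hostPort:containerPort, which restricts the mapping to one host interface. A hedged example (the container name nginx4 and the loopback-only binding are illustrative):

# only reachable through the host's loopback address, not from other machines
[root@docker ~]# docker run -d --name nginx4 -p 127.0.0.1:8080:80 httpd:1.10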

For every mapped port, the host starts a docker-proxy process to handle traffic destined for the container.

[root@docker ~]# ps -ef | grep docker-proxy
root       7215   2214  0 12:13 ?        00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.17.0.6 -container-port 80
root       7298   2214  0 12:13 ?        00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 32768 -container-ip 172.17.0.7 -container-port 80
root       7416   2214  0 12:15 ?        00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 90 -container-ip 172.17.0.8 -container-port 80
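The mappings of a single container can also be listed with docker port; checking the nginx2 container created above:

[root@docker ~]# docker port nginx2
80/tcp -> 0.0.0.0:90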

3. Container Linking

Container linking builds a dedicated communication tunnel between containers, addressed by container name.

Syntax:

--link name:alias

where name is the name of the container to link to, and alias is the alias used for the link.

3.1 Create the source container

[root@docker ~]# docker run -itd --name As docker.io/centos
1ae17d1eb0ccdb0b8d70e36b8352532847cd4e6ba368374e766517d1430b2245

3.2 Create the receiving container

Use --link to specify the container to link to, establishing the connection:

[root@docker ~]# docker run -itd --name Bs --link As:As docker.io/centos
e70de2e4d7945c6d431982306d45f3352f945726af9b5d78c666059c172fb22a

3.3 Test the link

[root@docker ~]# docker exec -it Bs /bin/bash
[root@e70de2e4d794 /]# ping As
PING As (172.17.0.7) 56(84) bytes of data.
64 bytes from As (172.17.0.7): icmp_seq=1 ttl=64 time=0.121 ms
64 bytes from As (172.17.0.7): icmp_seq=2 ttl=64 time=0.047 ms
64 bytes from As (172.17.0.7): icmp_seq=3 ttl=64 time=0.051 ms
64 bytes from As (172.17.0.7): icmp_seq=4 ttl=64 time=0.051 ms
64 bytes from As (172.17.0.7): icmp_seq=5 ttl=64 time=0.054 ms
64 bytes from As (172.17.0.7): icmp_seq=6 ttl=64 time=0.051 ms
64 bytes from As (172.17.0.7): icmp_seq=7 ttl=64 time=0.051 ms
^C
--- As ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 6ms
rtt min/avg/max/mdev = 0.047/0.060/0.121/0.026 ms

At this point As and Bs are linked: Docker has created a secure tunnel between the two containers, and there is no need to map their ports onto the host, which avoids exposing any ports to the external network.
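Under the hood, --link injects a hosts entry (and environment variables) for the alias into the receiving container; a hedged way to see this, reusing the Bs container above:

[root@docker ~]# docker exec Bs cat /etc/hosts       # expect a line mapping As to 172.17.0.7
[root@docker ~]# docker exec Bs env | grep AS_       # link-generated variables such as AS_NAME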

4. Docker Multi-Host Networking

1 Overlay networks

To support cross-host container communication, Docker provides the overlay driver, which lets users create VxLAN-based overlay networks. VxLAN encapsulates layer-2 frames in UDP for transport; it offers the same Ethernet layer-2 service as a VLAN but with far better scalability and flexibility. Docker overlay networks require a key-value store to hold network state, including networks, endpoints, and IPs. Consul, etcd, and ZooKeeper are all key-value stores supported by Docker; here we use Consul.

Lab environment:

Prerequisites:

A key-value store such as Consul must be installed
Docker Engine must be installed on each host
The hosts must have different hostnames

Environment:

Host IP         Hostname
192.168.1.40    docker01
192.168.1.41    docker02

Disable the firewall and SELinux on every server and set the hostnames.

docker01 configuration

[root@docker1 ~]# docker pull progrium/consul
Using default tag: latest
latest: Pulling from progrium/consul
c862d82a67a2: Pulling fs layer
0e7f3c08384e: Pulling fs layer
0e221e32327a: Pull complete
09a952464e47: Pull complete
60a1b927414d: Pull complete
4c9f46b5ccce: Pull complete
417d86672aa4: Pull complete
b0d47ad24447: Pull complete
fd5300bd53f0: Pull complete
a3ed95caeb02: Pull complete
d023b445076e: Pull complete
ba8851f89e33: Pull complete
5d1cefca2a28: Pull complete
Digest: sha256:8cc8023462905929df9a79ff67ee435a36848ce7a10f18d6d0faba9306b97274
Status: Downloaded newer image for progrium/consul:latest
[root@docker1 ~]# docker run -d -p 8500:8500 -h consul --name consul --restart always progrium/consul -server -bootstrap
f3e221286d9d755d8caf8b9275ab720f1230ed62465f63ee7d67e57075fda30d
[root@docker1 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                            NAMES
f3e221286d9d        progrium/consul     "/bin/start -server …"   3 seconds ago       Up 2 seconds        53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp   consul

Once the container is running, open the Consul UI to verify that the service is healthy:

http://192.168.1.40:8500/ui/#/dc1/services

docker02 configuration

This step is configured the same way on both docker01 and docker02.

[root@docker1 ~]# vim /usr/lib/systemd/system/docker.service
# modify the ExecStart line as follows:
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.1.40:8500 --cluster-advertise=ens33:2376
[root@docker1 ~]# systemctl daemon-reload
[root@docker1 ~]# systemctl restart docker

Then go back to the Consul web UI, navigate to KEY/VALUE -> docker -> nodes, and you will see the information for the docker02 host that just registered.
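If you prefer the command line, the same registration data can be read from Consul's HTTP KV API; a hedged check, assuming the default docker/nodes key prefix that the UI path above points at:

# list the Docker hosts recorded in the cluster store
[root@docker1 ~]# curl -s 'http://192.168.1.40:8500/v1/kv/docker/nodes?keys'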

Example

Get containers on docker1 and docker2 talking to each other.

docker1

[root@docker1 ~]# docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
9c075fe2c773: Pull complete
Digest: sha256:c3dbcbbf6261c620d133312aee9e858b45e1b686efbcead7b34d9aae58a37378
Status: Downloaded newer image for busybox:latest
# create a custom overlay network
[root@docker1 ~]# docker network create -d overlay ov_net1
3d6710de1265cde9bc040ae472869335beff8d91677eb9372dd823870992db4e
[root@docker1 ~]# docker run -it --name web1 --network ov_net1 busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 02:42:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.2/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
15: eth1@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever

docker2

[root@docker2 ~]# docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
9c075fe2c773: Pull complete
Digest: sha256:c3dbcbbf6261c620d133312aee9e858b45e1b686efbcead7b34d9aae58a37378
Status: Downloaded newer image for busybox:latest
[root@docker2 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
200f160f51b7        bridge              bridge              local
a434051856e2        host                host                local
15048de267b2        none                null                local
3d6710de1265        ov_net1             overlay             global
[root@docker2 ~]# docker run -it --name web2 --network ov_net1 busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 02:42:0a:00:00:03 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.3/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
11: eth1@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever
/ # ping web1
PING web1 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.854 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.545 ms
64 bytes from 10.0.0.2: seq=2 ttl=64 time=0.520 ms
64 bytes from 10.0.0.2: seq=3 ttl=64 time=0.541 ms
^C
--- web1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.520/0.615/0.854 ms

2 Macvlan networks

macvlan is itself a Linux kernel module; it allows multiple MAC addresses, that is multiple interfaces, to be configured on a single physical NIC, each with its own IP. macvlan is essentially a NIC virtualization technique, so it is no surprise that Docker uses it for container networking. Its biggest advantage is excellent performance: unlike other implementations, macvlan does not need a Linux bridge; containers connect to the physical network directly through the Ethernet interface.

Environment:

Host IP         Hostname
192.168.1.40    docker1
192.168.1.41    docker2

Disable the firewall and SELinux on each host and set the hostnames.

1. Macvlan single-network communication

[1] Enable promiscuous mode on the NIC

Run this on both docker1 and docker2:

[root@docker1 ~]# ip link set ens33 promisc on
[root@docker1 ~]# ip link show ens33
2: ens33: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:1b:88:f6 brd ff:ff:ff:ff:ff:ff

[2] Create the macvlan network

[root@docker ~]# docker network create -d macvlan --subnet 172.22.16.0/24 --gateway 172.22.16.1 -o parent=ens33 mac_net1
ccb52f7b8147feb56a0251115fc0b369fe518dd91799499b7c22007e9d5d150f

Note: -o parent= specifies which NIC the network is bound to.

[3] Run a container on the macvlan network

[root@docker ~]# docker run -itd --name AA --ip 172.22.16.10 --network mac_net1 busybox:latest
b62c4b1e38d82c30d296857098b3445a7057c85575ab6e6e00afab272f2cf47a

[4] Create the macvlan network on docker02 (it must be identical to the one on docker01) and run a container to test communication

[root@docker2 ~]# docker network create -d macvlan --subnet 172.22.16.0/24 --gateway 172.22.16.1 -o parent=ens33 mac_net1
f82483152a0873af4f8df49bda18ae2347bfca08829d222538d797db2bcbe55b
[root@localhost ~]# docker run -it --name BB --network mac_net1 --ip 172.22.16.20 busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
13: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:16:10:14 brd ff:ff:ff:ff:ff:ff
    inet 172.22.16.20/24 brd 172.22.16.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 172.22.16.10
PING 172.22.16.10 (172.22.16.10): 56 data bytes
64 bytes from 172.22.16.10: seq=0 ttl=64 time=0.591 ms
64 bytes from 172.22.16.10: seq=1 ttl=64 time=0.419 ms
64 bytes from 172.22.16.10: seq=2 ttl=64 time=0.364 ms
^C
--- 172.22.16.10 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.364/0.458/0.591 ms
2. Macvlan multi-network communication

Docker1 configuration

[1] Verify the 8021q kernel module (VLAN tagging)

Run this on both docker1 and docker2:

[root@docker ~]# modinfo 8021q
filename:       /lib/modules/3.10.0-1127.el7.x86_64/kernel/net/8021q/8021q.ko.xz
version:        1.8
license:        GPL
alias:          rtnl-link-vlan
retpoline:      Y
rhelversion:    7.8
srcversion:     1DD872AF3C7FF7FFD5B14D5
depends:        mrp,garp
intree:         Y
vermagic:       3.10.0-1127.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        69:0E:8A:48:2F:E7:6B:FB:F2:31:D8:60:F0:C6:62:D8:F1:17:3D:57
sig_hashalgo:   sha256

# if the kernel module is not loaded, load it with:
[root@docker ~]# modprobe 8021q

[2] Create VLAN sub-interfaces on top of ens33

# edit the ens33 NIC config file
[root@docker ~]# cd /etc/sysconfig/network-scripts/
[root@docker network-scripts]# vim ifcfg-ens33
......
BOOTPROTO="manual"
......

# add the first VLAN sub-interface
[root@docker network-scripts]# cp -p ifcfg-ens33 ifcfg-ens33.10      # -p preserves the attributes of the source file
[root@docker network-scripts]# vim ifcfg-ens33.10
BOOTPROTO=manual
NAME=ens33.10
DEVICE=ens33.10
ONBOOT=yes
IPADDR=192.168.10.10
PREFIX=24
GATEWAY=192.168.10.2
VLAN=yes

Note: the IP here must be distinguishable from the ens33 subnet; keep the gateway consistent with the subnet, keep the device name consistent with the file name, and enable VLAN support.

# add the second VLAN sub-interface
[root@docker network-scripts]# cp ifcfg-ens33.10 ifcfg-ens33.20
[root@docker network-scripts]# vim ifcfg-ens33.20
BOOTPROTO=manual
NAME=ens33.20
DEVICE=ens33.20
ONBOOT=yes
IPADDR=192.168.20.20
PREFIX=24
GATEWAY=192.168.20.2
VLAN=yes

# bring up the sub-interfaces
[root@docker network-scripts]# ifup ifcfg-ens33.10
[root@docker network-scripts]# ifup ifcfg-ens33.20
[root@docker network-scripts]# systemctl restart network
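If the sub-interfaces are only needed temporarily, an alternative (not part of the original walkthrough, and not persistent across reboots) is to create the same VLAN sub-interface with iproute2 instead of ifcfg files:

# create ens33.10 tagged with VLAN id 10, assign the same address, and bring it up
[root@docker ~]# ip link add link ens33 name ens33.10 type vlan id 10
[root@docker ~]# ip addr add 192.168.10.10/24 dev ens33.10
[root@docker ~]# ip link set ens33.10 up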

[3] Create macvlan networks on top of the VLAN sub-interfaces

[root@docker ~]# docker network create -d macvlan --subnet 172.16.10.0/24 --gateway 172.16.10.1 -o parent=ens33.10 mac_net10
90b339c5b06963917503ec2076f77643c9785b3e50bceb673c153446c1abc741
[root@docker ~]# docker network create -d macvlan --subnet 172.16.20.0/24 --gateway 172.16.20.1 -o parent=ens33.20 mac_net20
d211fe08b1e59a4ebcd78439e93484595c7a543655dd3b05e37bfd7eab6fcfef

[4] Run containers on the corresponding macvlan networks

Note: each container's IP must fall within its network's subnet, and IPs must not be duplicated.

[root@docker ~]# docker run -itd --name test1 --network mac_net10 --ip 172.16.10.10 busybox:latest
b5ae044d34fa3cb48a29a4f72a3b1673813275077cb976c57b3b501352cce777
[root@docker ~]# docker run -itd --name test2 --network mac_net20 --ip 172.16.20.20 busybox:latest
90ef2b7174ae54d737188d797f7cbc3e1a76d25364abc14302887fbd60d7c99f

Docker2 configuration

Note: steps 1 and 2 are the same as on docker1; when editing the NIC configs, make sure the IPs are unique.

[1] Create the VLAN sub-interfaces

# copy the ens33 sub-interface config files to docker2, then adjust the IPs
[root@docker ~]# cd /etc/sysconfig/network-scripts/
[root@docker ~]# scp -p /etc/sysconfig/network-scripts/ifcfg-ens33.* 192.168.1.41:/etc/sysconfig/network-scripts/
[root@docker2 network-scripts]# vim ifcfg-ens33.10
BOOTPROTO=manual
NAME=ens33.10
DEVICE=ens33.10
ONBOOT=yes
IPADDR=192.168.10.11
PREFIX=24
GATEWAY=192.168.10.2
VLAN=yes
[root@docker2 network-scripts]# vim ifcfg-ens33.20
BOOTPROTO=manual
NAME=ens33.20
DEVICE=ens33.20
ONBOOT=yes
IPADDR=192.168.20.21
PREFIX=24
GATEWAY=192.168.20.2
VLAN=yes
[root@docker2 network-scripts]# ifup ifcfg-ens33.10
[root@docker2 network-scripts]# ifup ifcfg-ens33.20
[root@docker2 network-scripts]# systemctl restart network

[2] Create macvlan networks on top of the VLAN sub-interfaces

[root@localhost ~]# docker network create -d macvlan --subnet 172.16.10.0/24 --gateway 172.16.10.1 -o parent=ens33.10 mac_net10
9ff022430b00525aefc968c2cef803358982300ff231c4fa300a171e1f021088
[root@localhost ~]# docker network create -d macvlan --subnet 172.16.20.0/24 --gateway 172.16.20.1 -o parent=ens33.20 mac_net20
7567dda0a781b2a0cd81b7574cec966ac787ed818fb99ac09295e3e5bf642d6b

[3] Run containers on the corresponding macvlan networks

[root@localhost ~]# docker run -itd --name test3 --network mac_net10 --ip 172.16.10.11 busybox:latest
1a5a012b57dac2cc42425952020cbd047797a7240d0987c65fde1d6b9831f224
[root@localhost ~]# docker run -itd --name test4 --network mac_net20 --ip 172.16.20.21 busybox:latest
ec1dbd2996f2cf8d3808b65fd87f82a2aefcd364d19e35f05cf15db70af315c7

Verification:

[root@docker1 ~]# docker exec -it test1 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
29: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:10:0a:0a brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 172.16.10.11
PING 172.16.10.11 (172.16.10.11): 56 data bytes
64 bytes from 172.16.10.11: seq=0 ttl=64 time=0.376 ms
64 bytes from 172.16.10.11: seq=1 ttl=64 time=0.274 ms
64 bytes from 172.16.10.11: seq=2 ttl=64 time=0.360 ms
^C
--- 172.16.10.11 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.274/0.336/0.376 ms
/ # exit
[root@docker1 ~]# docker exec -it test2 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
30: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:10:14:14 brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.20/24 brd 172.16.20.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 172.16.20.21
PING 172.16.20.21 (172.16.20.21): 56 data bytes
64 bytes from 172.16.20.21: seq=0 ttl=64 time=0.725 ms
^C
--- 172.16.20.21 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.725/0.725/0.725 ms