Linux Operations Notes: Docker Networking


Table of Contents

I. Network Modes
   1. The None Network
   2. The Host Network
   3. Bridge: The Bridged Network
   4. Two Ways to Configure a Custom (bridge) Network
   5. Joined Containers: container Mode (Shared Network Stack)
II. Port Mapping
   1. Manually Specify the Port Mapping
   2. Map a Random Host Port to the Container
   3. Map All Exposed Ports to Random Host Ports
III. Overlay Cross-Host Networking
   1. Run the consul Service
   2. Modify the Docker Config File
   3. Create the Custom Network
   4. View the Networks on All Hosts
   5. Run Containers on the ov_net Network
IV. MacVlan
   1. Single-Network MacVlan Communication
   2. Multi-Network MacVlan Communication


I. Network Modes

Mode         Description
Host         The container does not get its own virtual NIC or IP; it shares the host's IP address and port space directly.
Bridge       Each container is assigned its own IP and attached to the docker0 virtual bridge; it communicates with the host via the docker0 bridge and iptables NAT rules.
None         Networking is disabled for the container.
Container    The container does not create its own NIC or configure its own IP; it shares the IP and port range of a specified existing container.
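The mode is chosen at docker run time with the --network option. A minimal sketch of all four modes (the container names c_host, c_none, c_bridge, c_joined are illustrative, not from the original walkthrough):

[root@localhost ~]# docker run -itd --name c_host   --network host busybox:latest
[root@localhost ~]# docker run -itd --name c_none   --network none busybox:latest
[root@localhost ~]# docker run -itd --name c_bridge                busybox:latest   # bridge is the default
[root@localhost ~]# docker run -itd --name c_joined --network container:c_bridge busybox:latest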

View Docker's built-in networks:

[root@localhost ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
72422d3aec0d        bridge              bridge              local
cc4585529aa8        host                host                local
2dd36beaccac        none                null                local

1. The None Network

A container on the none network has only a loopback interface: no MAC address, no IP. That means it cannot communicate with the outside world at all; it is a fully isolated network.

[root@localhost ~]# docker run -itd --name none --network none busybox:latest
[root@localhost ~]# docker exec -it none sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
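As a quick sanity check (a sketch reusing the none container created above; the exact error text varies with the busybox version), any outbound traffic should fail:

[root@localhost ~]# docker exec -it none ping -c 1 -W 2 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
ping: sendto: Network is unreachable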

2. The Host Network

A container on the host network sees exactly the same network as the host. That is because its network namespace was never isolated when the container was created; it uses the host's network stack directly.

[root@localhost ~]# docker run -itd --name host --network host busybox:latest
[root@localhost ~]# docker exec -it host sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:96:0a:ec brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.10/24 brd 192.168.1.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::7b62:b4f3:e4e4:d24c/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue qlen 1000
    link/ether 52:54:00:ea:b6:ed brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 qlen 1000
    link/ether 52:54:00:ea:b6:ed brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
    link/ether 02:42:6e:87:cb:10 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
/ #

[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:96:0a:ec brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.10/24 brd 192.168.1.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::7b62:b4f3:e4e4:d24c/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:ea:b6:ed brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:ea:b6:ed brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:6e:87:cb:10 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
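One practical consequence: a service in a host-network container binds directly to the host's ports, so no -p mapping is needed, and port clashes with the host are possible. A minimal sketch (the container name web is illustrative; this assumes port 80 is free on the host):

[root@localhost ~]# docker run -d --name web --network host nginx:latest
[root@localhost ~]# curl -sI http://127.0.0.1 | head -1      # nginx answers on the host's own port 80
HTTP/1.1 200 OK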

3. Bridge: The Bridged Network

Bridge is Docker's default network mode: if no --network option is given, the container lands on the bridge network. Such containers attach to the docker0 bridge, which acts like a router for them, so their addresses all come from the same subnet as docker0. The docker0 interface defaults to 172.17.0.1/16.
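You can confirm that subnet before starting any containers; a quick check with docker network inspect, where the Go template just prints the IPAM subnet:

[root@localhost ~]# docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' bridge
172.17.0.0/16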

[root@localhost ~]# docker run -itd --name test1 busybox:latest
[root@localhost ~]# docker exec -it test1 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #

4. Two Ways to Configure a Custom (bridge) Network

1. Create a custom bridge network

[root@localhost ~]# docker network create -d bridge my_net1
[root@localhost ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
72422d3aec0d        bridge              bridge              local
cc4585529aa8        host                host                local
fdb9503b504d        my_net1             bridge              local
2dd36beaccac        none                null                local
[root@localhost ~]# brctl show
bridge name         bridge id           STP enabled     interfaces
br-fdb9503b504d     8000.0242d0c62fe3   no
docker0             8000.02426e87cb10   no              veth42035df
virbr0              8000.525400eab6ed   yes             virbr0-nic

2. Run containers on the custom network my_net1

[root@localhost ~]# docker run -itd --name web1 --network my_net1 busybox:latest
[root@localhost ~]# docker run -itd --name web2 --network my_net1 busybox:latest

3. Containers on the network can reach each other by IP or by container name

[root@localhost ~]# docker exec -it web1 sh
/ # ping web2
PING web2 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.069 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.18.0.3: seq=2 ttl=64 time=0.111 ms
^C
--- web2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.069/0.083/0.111 ms
/ #
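Name resolution works here because user-defined networks get Docker's embedded DNS; on the default bridge network, containers can only reach each other by IP. A sketch of the difference (the containers t1 and t2 and the 172.17.0.3 address are illustrative):

[root@localhost ~]# docker run -itd --name t1 busybox:latest
[root@localhost ~]# docker run -itd --name t2 busybox:latest
[root@localhost ~]# docker exec -it t1 ping -c 1 t2          # name lookup fails on the default bridge
ping: bad address 't2'
[root@localhost ~]# docker exec -it t1 ping -c 1 172.17.0.3  # pinging by IP still works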

4. Create a custom network with a specified subnet and gateway

[root@localhost ~]# docker network create -d bridge --subnet 172.20.16.0/24 --gateway 172.20.16.1 my_net2

5. Run containers on the custom network my_net2 with fixed IPs

[root@localhost ~]# docker run -itd --name web3 --network my_net2 --ip 172.20.16.6 busybox:latest
[root@localhost ~]# docker run -itd --name web4 --network my_net2 --ip 172.20.16.8 busybox:latest

6. Again, the containers can reach each other by IP or by container name

[root@localhost ~]# docker exec -it web3 sh
/ # ping web4
PING web4 (172.20.16.8): 56 data bytes
64 bytes from 172.20.16.8: seq=0 ttl=64 time=0.112 ms
64 bytes from 172.20.16.8: seq=1 ttl=64 time=0.067 ms
64 bytes from 172.20.16.8: seq=2 ttl=64 time=0.064 ms
64 bytes from 172.20.16.8: seq=3 ttl=64 time=0.064 ms
^C
--- web4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.064/0.076/0.112 ms
/ # ping 172.20.16.8
PING 172.20.16.8 (172.20.16.8): 56 data bytes
64 bytes from 172.20.16.8: seq=0 ttl=64 time=0.170 ms
64 bytes from 172.20.16.8: seq=1 ttl=64 time=0.068 ms
64 bytes from 172.20.16.8: seq=2 ttl=64 time=0.065 ms
64 bytes from 172.20.16.8: seq=3 ttl=64 time=0.074 ms
^C
--- 172.20.16.8 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.094/0.170 ms
/ #

7. Attach a container to the same network as the container it needs to reach

For example, an existing container test2 on another network can be connected to my_net1 so it can communicate with web1 and web2:

[root@localhost ~]# docker network connect my_net1 test2
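After the connect, the container gets a second interface on my_net1 and can reach the others by name. A quick check (a sketch, assuming test2 exists and is running; the 172.18.0.0/16 subnet matches the my_net1 addresses seen earlier):

[root@localhost ~]# docker exec -it test2 ip a               # now shows an extra ethX interface on 172.18.0.0/16
[root@localhost ~]# docker exec -it test2 ping -c 1 web1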

5. Joined Containers: container Mode (Shared Network Stack)

In this mode, a newly created container shares a Network Namespace with an existing container rather than with the host. The new container does not create its own NIC or configure its own IP; it shares the specified container's IP and port range. Apart from networking, everything else (filesystem, process list, and so on) stays isolated between the two containers. Processes in the two containers can talk to each other over the lo device.

1. Create a container named http on the default bridge network

[root@localhost ~]# docker run -itd --name http busybox:latest
[root@localhost ~]# docker exec -it http sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #

2. Create a container named zabbix that joins the network stack of http

[root@localhost ~]# docker run -itd --name zabbix --network container:http busybox:latest
358878ea33ab13259024d713babdca7679b295c25c5dc8c55f620401c9fcf781
[root@localhost ~]# docker exec -it zabbix sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #

Because of how this mode works, it is typically chosen when containers belong to the same service and that service needs monitoring, log collection, or network monitoring handled by a companion container.
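Since both containers share one network stack, they can also talk over 127.0.0.1. A minimal sketch using busybox's built-in httpd (the /tmp/www path and port 8080 are illustrative):

[root@localhost ~]# docker exec http sh -c 'mkdir -p /tmp/www && echo hello > /tmp/www/index.html && httpd -p 8080 -h /tmp/www'
[root@localhost ~]# docker exec zabbix wget -qO- http://127.0.0.1:8080/index.html   # served by the http container
hello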

II. Port Mapping

1. Manually specify the port mapping

[root@localhost ~]# docker run -itd --name nginx-1 -p 8080:80 nginx:latest

2. Map a random host port to the container

[root@localhost ~]# docker run -itd --name nginx-1 -p 80 nginx:latest

3. Map all exposed ports to random host ports

With -P (uppercase), every port the image exposes is mapped one-for-one to a random host port.

[root@localhost ~]# docker run -itd --name nginx-1 -P nginx:latest
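To see which host port was actually assigned, docker port works for any of these containers (the 32768 shown here is just an example of a randomly assigned port):

[root@localhost ~]# docker port nginx-1
80/tcp -> 0.0.0.0:32768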

III. Overlay Cross-Host Networking

Prerequisites:

- A key-value store service such as consul is installed
- Docker Engine is installed on each host
- Every host has a unique hostname

1. Run the consul service

[root@docker01 ~]# docker run -d -p 8500:8500 -h consul --name consul --restart always progrium/consul -server -bootstrap
[root@docker01 ~]# docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED         STATUS         PORTS                                                                            NAMES
1da6c5ffe278   progrium/consul   "/bin/start -server …"   6 seconds ago   Up 5 seconds   53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp   consul
[root@docker01 ~]# vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.1.10:8500 --cluster-advertise=ens33:2376
[root@docker01 ~]# systemctl daemon-reload
[root@docker01 ~]# systemctl restart docker

2. Distribute the modified Docker config file to the other hosts

[root@docker01 ~]# scp /usr/lib/systemd/system/docker.service root@192.168.1.20:/usr/lib/systemd/system/docker.service
[root@docker01 ~]# scp /usr/lib/systemd/system/docker.service root@192.168.1.30:/usr/lib/systemd/system/docker.service
[root@docker02 ~]# systemctl daemon-reload
[root@docker02 ~]# systemctl restart docker

(Run the same daemon-reload and restart on docker03 as well.)

3. Create the custom overlay network

[root@docker01 ~]# docker network create -d overlay ov_net

4. View the networks on all hosts

[root@docker01 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
75eb637a1b83        bridge              bridge              local
cc4585529aa8        host                host                local
2dd36beaccac        none                null                local
aa5c0adbb721        ov_net              overlay             global
[root@docker02 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
d8545106da13        bridge              bridge              local
9ee27a08a0b8        host                host                local
de00fe3c3c55        none                null                local
aa5c0adbb721        ov_net              overlay             global
[root@docker03 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
0b1752eb6d1a        bridge              bridge              local
44831ab8a762        host                host                local
ab5ee3f62401        none                null                local
aa5c0adbb721        ov_net              overlay             global

5. Run containers on the ov_net network

By default this network uses a subnet in the 10.0.0.0 range; it can also be specified manually with the same --subnet and --gateway options shown earlier.

[root@docker01 ~]# docker run -itd --name ovnet1 --network ov_net busybox:latest
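Containers on ov_net can then reach one another across hosts, and names resolve through Docker's embedded DNS. A sketch with a second container on docker02 (the name ovnet2 and the 10.0.0.x address shown are illustrative):

[root@docker02 ~]# docker run -itd --name ovnet2 --network ov_net busybox:latest
[root@docker02 ~]# docker exec -it ovnet2 ping -c 2 ovnet1
PING ovnet1 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.912 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.843 ms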

IV. MacVlan

1. Single-Network MacVlan Communication

1. Enable promiscuous mode on the NIC

[root@docker01 ~]# ip link set ens33 promisc on
[root@docker01 ~]# ip link show ens33
2: ens33: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:96:0a:ec brd ff:ff:ff:ff:ff:ff
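Note that promiscuous mode set this way does not survive a reboot. One common way to persist it on CentOS 7 is rc.local (a sketch; adjust the interface name to your environment):

[root@docker01 ~]# echo 'ip link set ens33 promisc on' >> /etc/rc.d/rc.local
[root@docker01 ~]# chmod +x /etc/rc.d/rc.local   # rc.local is not executable by default on CentOS 7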

2. Create the macvlan network

[root@docker01 ~]# docker network create -d macvlan --subnet 172.22.16.0/24 --gateway 172.22.16.1 -o parent=ens33 mac_net1

3. Run a container on the new macvlan network

[root@docker01 ~]# docker run -itd --name bbox1 --ip 172.22.16.10 --network mac_net1 busybox

4. Create the macvlan network on docker02; note that it must match the one on docker01 exactly

[root@docker02 ~]# ip link set ens33 promisc on
[root@docker02 ~]# ip link show ens33
2: ens33: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:45:b5:2d brd ff:ff:ff:ff:ff:ff
[root@docker02 ~]# docker network create -d macvlan --subnet 172.22.16.0/24 --gateway 172.22.16.1 -o parent=ens33 mac_net1
[root@docker02 ~]# docker run -itd --name bbox2 --ip 172.22.16.11 --network mac_net1 busybox

5. Verify communication between the containers

[root@docker02 ~]# docker exec -it bbox2 sh
/ # ping 172.22.16.10
PING 172.22.16.10 (172.22.16.10): 56 data bytes
64 bytes from 172.22.16.10: seq=0 ttl=64 time=0.772 ms
64 bytes from 172.22.16.10: seq=1 ttl=64 time=0.536 ms
64 bytes from 172.22.16.10: seq=2 ttl=64 time=0.463 ms
64 bytes from 172.22.16.10: seq=3 ttl=64 time=0.444 ms
^C
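One known macvlan limitation: the host cannot reach its own macvlan containers through the parent interface. If host-to-container access is needed, a common workaround is to give the host its own macvlan sub-interface (a sketch; the mac0 name and the .250 address are illustrative):

[root@docker01 ~]# ip link add mac0 link ens33 type macvlan mode bridge
[root@docker01 ~]# ip addr add 172.22.16.250/24 dev mac0
[root@docker01 ~]# ip link set mac0 up
[root@docker01 ~]# ping -c 1 172.22.16.10        # host can now reach bbox1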

2. Multi-Network MacVlan Communication

1. Verify the 8021q kernel module (VLAN tagging support)

[root@docker01 ~]# modinfo 8021q
filename:       /lib/modules/3.10.0-1127.18.2.el7.x86_64/kernel/net/8021q/8021q.ko.xz
version:        1.8
license:        GPL
alias:          rtnl-link-vlan
retpoline:      Y
rhelversion:    7.8
srcversion:     1DD872AF3C7FF7FFD5B14D5
depends:        mrp,garp
intree:         Y
vermagic:       3.10.0-1127.18.2.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        C6:5D:F3:F8:0C:5C:C3:53:A7:25:6E:1F:8E:44:52:89:1E:D8:9C:FE
sig_hashalgo:   sha256
[root@docker02 ~]# modinfo 8021q
filename:       /lib/modules/3.10.0-1127.18.2.el7.x86_64/kernel/net/8021q/8021q.ko.xz
version:        1.8
license:        GPL
alias:          rtnl-link-vlan
retpoline:      Y
rhelversion:    7.8
srcversion:     1DD872AF3C7FF7FFD5B14D5
depends:        mrp,garp
intree:         Y
vermagic:       3.10.0-1127.18.2.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        C6:5D:F3:F8:0C:5C:C3:53:A7:25:6E:1F:8E:44:52:89:1E:D8:9C:FE
sig_hashalgo:   sha256

If the module is not loaded, load it with:

[root@docker01 ~]# modprobe 8021q
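To make the module load persist across reboots (a sketch using systemd's modules-load mechanism):

[root@docker01 ~]# echo 8021q > /etc/modules-load.d/8021q.conf
[root@docker01 ~]# lsmod | grep 8021q            # confirm it is loaded now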

2. Create virtual NICs (VLAN sub-interfaces) based on ens33

First, switch the ens33 config to manual addressing:

[root@docker01 ~]# cd /etc/sysconfig/network-scripts/
[root@docker01 network-scripts]# vim ifcfg-ens33
BOOTPROTO="manual"

Then manually add config files for the virtual NICs:

[root@docker01 network-scripts]# cp -p ifcfg-ens33 ifcfg-ens33.10
[root@docker01 network-scripts]# vim ifcfg-ens33.10
BOOTPROTO="manual"
NAME="ens33.10"
DEVICE="ens33.10"
ONBOOT="yes"
IPADDR="192.168.10.10"
PREFIX="24"
GATEWAY="192.168.10.1"
VLAN=yes
[root@docker01 network-scripts]# cp ifcfg-ens33.10 ifcfg-ens33.20
[root@docker01 network-scripts]# vim ifcfg-ens33.20
BOOTPROTO="manual"
NAME="ens33.20"
DEVICE="ens33.20"
ONBOOT="yes"
IPADDR="192.168.20.10"
PREFIX="24"
GATEWAY="192.168.20.1"
VLAN=yes

3. Bring up the new virtual NICs

[root@docker01 network-scripts]# ifup ifcfg-ens33.10
[root@docker01 network-scripts]# ifup ifcfg-ens33.20
[root@docker01 network-scripts]# ip a
......
11: ens33.10@ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:96:0a:ec brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.10/24 brd 192.168.10.255 scope global ens33.10
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe96:aec/64 scope link
       valid_lft forever preferred_lft forever
12: ens33.20@ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:96:0a:ec brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.10/24 brd 192.168.20.255 scope global ens33.20
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe96:aec/64 scope link
       valid_lft forever preferred_lft forever

4. Repeat the interface setup on docker02: copy the configs, adjust the IPs, and bring the interfaces up

[root@docker01 ~]# cd /etc/sysconfig/network-scripts/
[root@docker01 network-scripts]# scp ifcfg-ens33.10 root@192.168.1.20:/etc/sysconfig/network-scripts/
[root@docker01 network-scripts]# scp ifcfg-ens33.20 root@192.168.1.20:/etc/sysconfig/network-scripts/
[root@docker02 ~]# cd /etc/sysconfig/network-scripts/
[root@docker02 network-scripts]# vim ifcfg-ens33.10
BOOTPROTO="manual"
NAME="ens33.10"
DEVICE="ens33.10"
ONBOOT="yes"
IPADDR="192.168.10.11"
PREFIX="24"
GATEWAY="192.168.10.1"
VLAN=yes
[root@docker02 network-scripts]# vim ifcfg-ens33.20
BOOTPROTO="manual"
NAME="ens33.20"
DEVICE="ens33.20"
ONBOOT="yes"
IPADDR="192.168.20.11"
PREFIX="24"
GATEWAY="192.168.20.1"
VLAN=yes
[root@docker02 network-scripts]# ifup ifcfg-ens33.10
[root@docker02 network-scripts]# ifup ifcfg-ens33.20
[root@docker02 network-scripts]# ip a
7: ens33.10@ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:45:b5:2d brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.11/24 brd 192.168.10.255 scope global ens33.10
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe45:b52d/64 scope link
       valid_lft forever preferred_lft forever
8: ens33.20@ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:45:b5:2d brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.11/24 brd 192.168.20.255 scope global ens33.20
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe45:b52d/64 scope link
       valid_lft forever preferred_lft forever

5. Create macvlan networks on ens33.10 and ens33.20

[root@docker01 ~]# docker network create -d macvlan --subnet 172.16.10.0/24 --gateway 172.16.10.1 -o parent=ens33.10 mac_net10
[root@docker01 ~]# docker network create -d macvlan --subnet 172.16.20.0/24 --gateway 172.16.20.1 -o parent=ens33.20 mac_net20
[root@docker02 ~]# docker network create -d macvlan --subnet 172.16.10.0/24 --gateway 172.16.10.1 -o parent=ens33.10 mac_net10
[root@docker02 ~]# docker network create -d macvlan --subnet 172.16.20.0/24 --gateway 172.16.20.1 -o parent=ens33.20 mac_net20

6. Run containers and verify

[root@docker01 ~]# docker run -itd --name bbox10 --network mac_net10 --ip 172.16.10.10 busybox:latest
[root@docker01 ~]# docker run -itd --name bbox20 --network mac_net20 --ip 172.16.20.20 busybox:latest
[root@docker02 ~]# docker run -itd --name bbox10 --network mac_net10 --ip 172.16.10.11 busybox:latest
[root@docker02 ~]# docker run -itd --name bbox20 --network mac_net20 --ip 172.16.20.21 busybox:latest
[root@docker01 ~]# docker exec -it bbox10 sh
/ # ping 172.16.10.11
PING 172.16.10.11 (172.16.10.11): 56 data bytes
64 bytes from 172.16.10.11: seq=0 ttl=64 time=0.490 ms
64 bytes from 172.16.10.11: seq=1 ttl=64 time=0.476 ms
64 bytes from 172.16.10.11: seq=2 ttl=64 time=0.546 ms
64 bytes from 172.16.10.11: seq=3 ttl=64 time=0.555 ms
^C
--- 172.16.10.11 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.476/0.516/0.555 ms
/ #
[root@docker01 ~]# docker exec -it bbox20 sh
/ # ping 172.16.20.21
PING 172.16.20.21 (172.16.20.21): 56 data bytes
64 bytes from 172.16.20.21: seq=0 ttl=64 time=0.672 ms
64 bytes from 172.16.20.21: seq=1 ttl=64 time=0.644 ms
64 bytes from 172.16.20.21: seq=2 ttl=64 time=0.705 ms
64 bytes from 172.16.20.21: seq=3 ttl=64 time=0.570 ms
^C
--- 172.16.20.21 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.570/0.647/0.705 ms
/ #
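Note that bbox10 and bbox20 sit in different VLANs, so they cannot reach each other directly; traffic between 172.16.10.0/24 and 172.16.20.0/24 would have to pass through an external router or layer-3 switch. A sketch of what to expect without one:

[root@docker01 ~]# docker exec -it bbox10 ping -c 2 -W 2 172.16.20.21   # cross-VLAN, no router: expect total loss
PING 172.16.20.21 (172.16.20.21): 56 data bytes
--- 172.16.20.21 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss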

Note: when running inside VMware Workstation, the macvlan containers may fail to connect; switch the VM's network mode to Bridged.
