Understanding Docker in One Article (Part 2)

A detailed introduction to Docker images, registries, networking, data volumes, and related topics

6. Docker Data Management

A Docker image is built from multiple read-only layers stacked on top of each other. When a container starts, Docker loads the read-only image layers and adds a read-write layer on top of the image stack.

If a running container modifies an existing file, that file is first copied from the read-only layer below into the read-write layer. The read-only version of the file still exists, but it is hidden by the copy in the read-write layer. This is the "copy-on-write" (COW) mechanism.

If a running container produces new data, that data is written to the read-write layer, which serves as the container's working directory; this, too, is part of the copy-on-write (COW) mechanism.

COW saves space but costs performance. Data survives stopping and restarting a container, but when a container is deleted its writable layer is deleted with it, so the data is lost. If a container needs to persist data without hurting performance, use the data volume mechanism.

As the figure below shows, data written to the root filesystem goes into the container's writable layer, while data written to /data goes into a separate volume used for persistence.
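A minimal sketch of that difference (image name, volume name and paths are just examples):

#Writes outside a volume land in the container's writable layer and die with the container
docker run --rm ubuntu:18.04 sh -c 'echo gone > /tmp/file'

#Writes into a volume survive container deletion
docker run --rm -v mydata:/data ubuntu:18.04 sh -c 'echo kept > /data/file'
docker run --rm -v mydata:/data ubuntu:18.04 cat /data/file    #prints: kept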

6.1 Introduction to Container Data Management

Docker images use a layered design: the image layers are read-only, and a container started from an image adds one readable-writable filesystem layer on top; everything the user writes is saved in that layer.

6.1.1 Docker Container Layers

The directories behind a container's data layers

  • LowerDir: the image layers, i.e. the image itself; read-only
  • UpperDir: the container's upper layer, read-write; data changed by the container is stored here
  • MergedDir: the container's filesystem; a union filesystem (Union FS) merges lowerdir and upperdir into the unified view presented to the container
  • WorkDir: the container's working directory on the host; its contents are cleared after mounting and are not visible to the user while in use
#View a given container's data layers (pick a running container to inspect)
#View the details of the image itself (before any container is created)
root@ubuntu1804:~# docker inspect  ubuntu:18.04
        "GraphDriver": {
            "Data": {
                "MergedDir": "/var/lib/docker/overlay2/bf68e1d7507660756f42300aa190406f4b9c43cce6ec7aa948fbcc9e0cefa6f1/merged",
                "UpperDir": "/var/lib/docker/overlay2/bf68e1d7507660756f42300aa190406f4b9c43cce6ec7aa948fbcc9e0cefa6f1/diff",
                "WorkDir": "/var/lib/docker/overlay2/bf68e1d7507660756f42300aa190406f4b9c43cce6ec7aa948fbcc9e0cefa6f1/work"
            },
            "Name": "overlay2"
        },

#View the details of a container created from this image
root@ubuntu1804:~# docker inspect dc609bb7b2e5
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a-init/diff:/var/lib/docker/overlay2/bf68e1d7507660756f42300aa190406f4b9c43cce6ec7aa948fbcc9e0cefa6f1/diff",
                "MergedDir": "/var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a/merged",
                "UpperDir": "/var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a/diff",
                "WorkDir": "/var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a/work"
            },
            "Name": "overlay2"
        },


#View the layer directories on the host
root@ubuntu1804:/# ll /var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a
total 28
drwx------  5 root root 4096 Nov  6 14:54 ./
drwx------ 34 root root 4096 Nov  6 14:54 ../
drwxr-xr-x  3 root root 4096 Nov  6 14:54 diff/
-rw-r--r--  1 root root   26 Nov  6 14:54 link
-rw-r--r--  1 root root   57 Nov  6 14:54 lower
drwxr-xr-x  1 root root 4096 Nov  6 14:54 merged/
drwx------  3 root root 4096 Nov  6 14:54 work/

root@ubuntu1804:/# ll /var/lib/docker/overlay2/bf68e1d7507660756f42300aa190406f4b9c43cce6ec7aa948fbcc9e0cefa6f1
total 16
drwx------  3 root root 4096 Nov  5 19:02 ./
drwx------ 34 root root 4096 Nov  6 14:54 ../
-rw-------  1 root root    0 Nov  6 14:54 committed
drwxr-xr-x 21 root root 4096 Nov  5 19:02 diff/
-rw-r--r--  1 root root   26 Nov  5 19:02 link

#Each layer directory contains a file named link and a directory named diff; the link file holds the layer's short identifier, and the layer's content lives in the diff directory
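The short identifier and the chain of lower layers can be read directly; a quick sketch using the layer directories shown above:

#short identifier of the layer, also symlinked under /var/lib/docker/overlay2/l/
cat /var/lib/docker/overlay2/bf68e1d7507660756f42300aa190406f4b9c43cce6ec7aa948fbcc9e0cefa6f1/link

#the container layer's lower file lists the short IDs of the layers below it, colon-separated
cat /var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a/lower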


#Create a file inside the container
root@ubuntu1804:~# docker run -it ubuntu:18.04 
root@dc609bb7b2e5:/# dd if=/dev/zero of=/root/test.img bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.00537957 s, 1.9 GB/s
root@dc609bb7b2e5:/# 



#Find the host path of the file created in the container
root@ubuntu1804:/# find /var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a -name test.img -ls
  5642867  10240 -rw-r--r--   1 root     root     10485760 Nov  6 14:58 /var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a/diff/root/test.img
  5642867  10240 -rw-r--r--   1 root     root     10485760 Nov  6 14:58 /var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a/merged/root/test.img


#View the mount info
root@ubuntu1804:/# mount
overlay on /var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a/merged type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/RBZCPUXVNW2ZRMGKKZ5L7NPO4T:/var/lib/docker/overlay2/l/KEETZYPITKIYPUC6R6RZOFPL2D,upperdir=/var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a/diff,workdir=/var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a/work)
nsfs on /run/docker/netns/cd74505a8218 type nsfs (rw)

root@ubuntu1804:/# df
Filesystem     1K-blocks    Used Available Use% Mounted on
udev              461032       0    461032   0% /dev
tmpfs              98532   10060     88472  11% /run
/dev/sda1       95595940 3893280  86803540   5% /
tmpfs             492652       0    492652   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs             492652       0    492652   0% /sys/fs/cgroup
/dev/sda6       47797996   55400  45284844   1% /data
/dev/sda5         944120   79028    799916   9% /boot
tmpfs              98528       0     98528   0% /run/user/1000
overlay         95595940 3893280  86803540   5% /var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a/merged


#View the directory tree
root@ubuntu1804:/# tree /var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a/diff/
/var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a/diff/
└── root
    └── test.img

1 directory, 1 file


#Modify a file that already existed in the image
root@dc609bb7b2e5:/# echo  welcome to haha >> /etc/issue
root@dc609bb7b2e5:/# cat /etc/issue
Ubuntu 18.04.6 LTS \n \l

welcome to haha


#View the directory tree again
root@ubuntu1804:/# tree /var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a/diff/
/var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a/diff/
├── etc
│   └── issue
└── root
    └── test.img

2 directories, 2 files


#After the container is deleted, all of its data directories are deleted with it
root@ubuntu1804:/# docker ps 
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
dc609bb7b2e5        ubuntu:18.04        "bash"              22 minutes ago      Up 22 minutes                           wizardly_chaum

root@ubuntu1804:/# docker rm -f dc609bb7b2e5 
dc609bb7b2e5

root@ubuntu1804:/# ls /var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a
ls: cannot access '/var/lib/docker/overlay2/955cb4175e12d97161ccf4841d42d3ec3ca593b0a8bdd9d8e6236dffbb11b18a': No such file or directory
root@ubuntu1804:/# 


6.1.2 Which Data Needs to Be Persisted

Stateful protocols

A stateful protocol is one where the two communicating parties remember each other and share some information, whereas every exchange of a stateless protocol is independent, with no relation to the previous one.
"State" can be understood as "memory": stateful means having memory, stateless means having none.

  • In the figure, the left side is a stateless HTTP request service, the right side a stateful one
  • The lower layer holds services that need no storage, the upper layer those services that do

6.1.3 Ways to Persist Container Data

To keep data written into a container permanently, the container's data must be saved into a designated directory on the host.

Docker offers two data types:

  • Data volume (Data Volume): mounts a host directory directly at a specified directory in the container; this is the recommended and most commonly used approach
  • Data volume container (Data Volume Container): uses host space indirectly; a host directory is mounted into a dedicated data-volume container, and other containers read and write the host's data through that container; this approach is rarely used

6.2 ★★Data Volume★★

6.2.1 Data Volume Characteristics and Usage

A data volume is really just a directory or file on the host that can be mounted directly into a container.

In real production environments, storage must be planned according to the type of service and the type of data involved, so as to guarantee the service's scalability and stability and the safety of its data.

6.2.1.1 Data Volume Use Cases

  • Databases
  • Log output
  • Static web pages
  • Application configuration files
  • Directories or files shared among multiple containers

6.2.1.2 Characteristics of Data Volumes

  • A data volume is a directory or file that can be used by multiple containers at once, enabling sharing and reuse between containers
  • Changes to a data volume take effect immediately in every container that uses it
  • The data in a volume persists even after the containers using it are deleted
  • Writes inside a container do not affect the image itself; changes in the volume do not alter the image
  • Volumes depend on host directories: if the host fails, the containers on it are affected, and with many hosts unified management becomes inconvenient
  • Anonymous and named volumes are initialized when the container starts; if the image contains data at the mount point, that data is copied into the newly initialized volume, as the sketch below shows
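A quick sketch of that copy-on-init behavior (volume and container names are just examples):

docker volume create demo
docker run -d --name tmp-nginx -v demo:/usr/share/nginx/html nginx
ls /var/lib/docker/volumes/demo/_data    #now holds the image's 50x.html and index.html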

6.2.1.3 How to Use Data Volumes

When starting a container, a data volume can be specified to persist the container's data. There are three kinds:

  • Host directory or file: maps a concrete host path to a container path
  • Anonymous volume: no volume name; only a container path is given as the mount point, and docker picks the host path automatically
  • Named volume: maps a volume name to a container path

The following docker run syntax creates data volumes:

-v, --volume=[host-src:]container-dest[:<options>]
<options>
ro  the volume is read-only from inside the container
rw  the volume is read-write from inside the container; this is the default when no option is given

Method 1

#Host directory or file format:
-v  <absolute host directory or file>:<container directory or file>[:ro]  #mount a host directory onto a container directory; both directories are created automatically if missing

Method 2

#Anonymous volume: only the container path is given, no host path; the host automatically creates a /var/lib/docker/volumes/<volume-ID>/_data directory and mounts it at the container path
-v <container path>
#Example:
docker run --name nginx -v /etc/nginx nginx

Method 3

#A named volume is always stored under /var/lib/docker/volumes/<volume-name>/_data
-v <volume name>:<container directory path>
#The volume can be created in advance with the command below; if it wasn't created beforehand, docker run creates it automatically
docker volume create <volume name>
#Example:
docker run -d  -p 80:80 --name nginx01 -v vol1:/usr/share/nginx/html nginx

The -v option of docker rm removes the associated anonymous volumes together with the container

-v, --volumes  Remove the volumes associated with the container
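A quick usage sketch (container name is an example):

docker run -d --name tmp -v /data nginx
docker rm -f -v tmp       #removes the container together with its anonymous volume
docker volume ls          #the anonymous volume no longer appears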

Volume management commands

docker volume COMMAND
Commands:
  create      Create a volume
  inspect     Display detailed information on one or more volumes
  ls          List volumes
  prune       Remove all unused local volumes
  rm          Remove one or more volumes

About anonymous and named data volumes

A named volume is a volume with a name, created and named via docker volume create <volume name>; an anonymous volume is one without a name, produced when docker run -v /data is used without specifying a volume name, or defined directly by a Dockerfile.

A named volume can be mounted into containers again later, because the name lets you refer to it; data that must be kept is therefore generally stored in named volumes.
An anonymous volume is created along with its container and, when the container dies, is left buried in the volume list (docker run does not delete anonymous volumes automatically). Anonymous volumes should therefore hold only unimportant temporary data that loses its meaning once the container is gone.

A VOLUME declared in a Dockerfile is an anonymous volume; its only purpose is to mark a certain path as a volume.

Best practice says no data should be written into the container's storage layer; all writes should go to volumes. If, when building an image, you already know that certain directories will see frequent heavy reads and writes, you can declare those directories as anonymous volumes with VOLUME in the Dockerfile. Then even if the user forgets to specify a volume at run time, writes still won't land in the storage layer.
The setting can be overridden at run time, via docker run's -v option or the volumes key of docker-compose.yml. The benefit of a named volume is reuse: other containers can mount it by name and share its content (though beware of concurrent-access races).

For example, if a Dockerfile says VOLUME /data, a plain docker run mounts /data as an anonymous volume, and writes to /data go to that anonymous volume rather than to the container's storage layer. But running docker run -v mydata:/data overrides the mount and attaches /data to the named volume mydata.
So VOLUME in a Dockerfile is really an insurance policy: it keeps the image's users closer to best practice and away from writing into the container storage layer.
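A minimal sketch of that override (image tag and volume name are just examples):

cat > Dockerfile <<'EOF'
FROM ubuntu:18.04
VOLUME /data
CMD ["sleep", "infinity"]
EOF
docker build -t vol-demo .
docker run -d vol-demo                    #/data is mounted as an anonymous volume
docker run -d -v mydata:/data vol-demo    #the VOLUME default is overridden by the named volume mydata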

Data volumes are kept under /var/lib/docker/volumes by default, but normally that location neither needs to be, nor should be, accessed directly.

Viewing a data volume's mount relationships

docker inspect --format="{{.Mounts}}" <container ID>

Deleting all data volumes

root@ubuntu1804:~# docker volume rm `docker volume ls -q`

6.2.2 Case Study: Directory Data Volume

6.2.2.1 Create the Directory for the Container on the Host

root@ubuntu1804:~# mkdir /data/testdir
root@ubuntu1804:~# echo  haha > /data/testdir/index.html


6.2.2.2 Check the Relevant Container Paths

root@ubuntu1804:~# docker run  -it --rm centos7-nginx:3.0 sh
sh-4.2# cat /app/nginx/conf/nginx.conf
...
 server {
...
        location / {
            root   html;                 #where nginx stores its page files; a path relative to /app/nginx
            index  index.html index.htm;
        }
...


sh-4.2# cat /app/nginx/html/index.html 
hello nginx
sh-4.2# 
sh-4.2# exit
exit

root@ubuntu1804:~# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES


6.2.2.3 Start Containers That Reference the Host Data Volume

Reference the same volume directory and start multiple containers

root@ubuntu1804:~# docker run -d -v /data/testdir/:/app/nginx/html/ -p 8081:80 centos7-nginx:3.0 
96102f10c6a04c1646443c995d5743f754c3dbc5e4f496a356cd8c7a2b6161f6

root@ubuntu1804:~# docker run -d -v /data/testdir/:/app/nginx/html/ -p 8082:80 centos7-nginx:3.0 
e739547c28e58f83495062b080db02aa9d452290322ec9700ba292945c15f362

root@ubuntu1804:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                           NAMES
e739547c28e5        centos7-nginx:3.0   "nginx -g 'daemon of…"   6 seconds ago       Up 5 seconds        443/tcp, 0.0.0.0:8082->80/tcp   gracious_pascal
96102f10c6a0        centos7-nginx:3.0   "nginx -g 'daemon of…"   42 seconds ago      Up 41 seconds       443/tcp, 0.0.0.0:8081->80/tcp   agitated_tu

root@ubuntu1804:~# curl 127.0.0.1:8081
haha
root@ubuntu1804:~# curl 127.0.0.1:8082
haha


6.2.2.4 Enter a Container and Test Writing Data

Write data to the volume from inside one container; the data changes in every other container that uses the same volume

#Enter one of the containers and modify the data
root@ubuntu1804:~# docker exec -it 96102f10c6a0 sh
sh-4.2# df
Filesystem     1K-blocks    Used Available Use% Mounted on
overlay         95595940 3883260  86813560   5% /
tmpfs              65536       0     65536   0% /dev
tmpfs             492652       0    492652   0% /sys/fs/cgroup
shm                65536       0     65536   0% /dev/shm
/dev/sda1       95595940 3883260  86813560   5% /etc/hosts
/dev/sda6       47797996   55408  45284836   1% /app/nginx/html
tmpfs             492652       0    492652   0% /proc/asound
tmpfs             492652       0    492652   0% /proc/acpi
tmpfs             492652       0    492652   0% /proc/scsi
tmpfs             492652       0    492652   0% /sys/firmware
sh-4.2# cat /app/nginx/html/index.html 
haha
sh-4.2# echo 'lalahuhu' > /app/nginx/html/index.html 
sh-4.2# 

#Enter the other container and check
root@ubuntu1804:/# docker exec -it e739547c28e5 sh
sh-4.2# cat /app/nginx/html/index.html 
lalahuhu
sh-4.2# 

#Access via curl
root@ubuntu1804:/# curl 127.0.0.1:8081
lalahuhu
root@ubuntu1804:/# curl 127.0.0.1:8082
lalahuhu


6.2.2.5 Modify the Data on the Host

root@ubuntu1804:/# echo 'heihei' > /data/testdir/index.html 

#Access via curl
root@ubuntu1804:/# curl 127.0.0.1:8082
heihei
root@ubuntu1804:/# curl 127.0.0.1:8081
heihei

#Check inside a container
root@ubuntu1804:~# docker exec -it 96102f10c6a0 sh
sh-4.2# cat /app/nginx/html/index.html 
heihei


6.2.2.6 Mount a Data Volume Read-Only

Volumes are read-write by default. Adding the ro option mounts them read-only, which suits data the container should not modify, e.g. configuration files and scripts

root@ubuntu1804:/# docker run -d -v /data/testdir/:/app/nginx/html/:ro -p 8083:80 centos7-nginx:3.0 
ccdb2edbc925acf6ba413987df396a6a8969a488c27df2f5b4a19ed167c1fadd

root@ubuntu1804:/# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                           NAMES
ccdb2edbc925        centos7-nginx:3.0   "nginx -g 'daemon of…"   10 seconds ago      Up 9 seconds        443/tcp, 0.0.0.0:8083->80/tcp   elastic_dubinsky
e739547c28e5        centos7-nginx:3.0   "nginx -g 'daemon of…"   14 minutes ago      Up 14 minutes       443/tcp, 0.0.0.0:8082->80/tcp   gracious_pascal
96102f10c6a0        centos7-nginx:3.0   "nginx -g 'daemon of…"   14 minutes ago      Up 14 minutes       443/tcp, 0.0.0.0:8081->80/tcp   agitated_tu

root@ubuntu1804:/# docker exec -it ccdb2edbc925 sh
sh-4.2# cat /app/nginx/html/index.html 
heihei
sh-4.2# echo 'papapa' > /app/nginx/html/index.html 
sh: /app/nginx/html/index.html: Read-only file system

sh-4.2# ls -l /app/nginx/html/index.html 
-rw-r--r-- 1 root root 7 Nov  6 08:16 /app/nginx/html/index.html

sh-4.2# chown +w /app/nginx/html/index.html 
chown: invalid user: '+w'
sh-4.2# ls -l /app/nginx/html/index.html 
-rw-r--r-- 1 root root 7 Nov  6 08:16 /app/nginx/html/index.html

sh-4.2# chown 777 /app/nginx/html/index.html 
chown: changing ownership of '/app/nginx/html/index.html': Read-only file system
sh-4.2# ls -l /app/nginx/html/index.html 
-rw-r--r-- 1 root root 7 Nov  6 08:16 /app/nginx/html/index.html



6.2.2.7 Delete the Containers

After the containers are deleted, the data volume still exists on the host and can be reused by new containers

root@ubuntu1804:/# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                           NAMES
ccdb2edbc925        centos7-nginx:3.0   "nginx -g 'daemon of…"   5 minutes ago       Up 5 minutes        443/tcp, 0.0.0.0:8083->80/tcp   elastic_dubinsky
e739547c28e5        centos7-nginx:3.0   "nginx -g 'daemon of…"   19 minutes ago      Up 19 minutes       443/tcp, 0.0.0.0:8082->80/tcp   gracious_pascal
323eca5a237d        centos7-nginx:3.0   "nginx -g 'daemon of…"   19 minutes ago      Created                                             hardcore_mccarthy
96102f10c6a0        centos7-nginx:3.0   "nginx -g 'daemon of…"   19 minutes ago      Up 19 minutes       443/tcp, 0.0.0.0:8081->80/tcp   agitated_tu
root@ubuntu1804:/# docker rm -f `docker ps -aq`
ccdb2edbc925
e739547c28e5
323eca5a237d
96102f10c6a0
root@ubuntu1804:/# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
root@ubuntu1804:/# cat /data/testdir/index.html 
heihei

#A newly created container can keep using the existing data volume
root@ubuntu1804:/# docker run -d -v /data/testdir/:/app/nginx/html/ -p 8084:80 centos7-nginx:3.0 
58ac9a1bb915f496ff83dc3f0111913f7f9b970cf7f46670e986019c507e60f8
root@ubuntu1804:/# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                           NAMES
58ac9a1bb915        centos7-nginx:3.0   "nginx -g 'daemon of…"   13 seconds ago      Up 12 seconds       443/tcp, 0.0.0.0:8084->80/tcp   affectionate_roentgen
root@ubuntu1804:/# curl 127.0.0.1:8084
heihei


6.2.3 Case Study: Data Volume for MySQL

#Pull the mysql image
root@ubuntu1804:/# docker pull mysql:5.7.30
5.7.30: Pulling from library/mysql
8559a31e96f4: Pull complete 
d51ce1c2e575: Pull complete 
c2344adc4858: Pull complete 
fcf3ceff18fc: Pull complete 
16da0c38dc5b: Pull complete 
b905d1797e97: Pull complete 
4b50d1c6b05c: Pull complete 
d85174a87144: Pull complete 
a4ad33703fa8: Pull complete 
f7a5433ce20d: Pull complete 
3dcd2a278b4a: Pull complete 
Digest: sha256:32f9d9a069f7a735e28fd44ea944d53c61f990ba71460c5c183e610854ca4854
Status: Downloaded newer image for mysql:5.7.30
docker.io/library/mysql:5.7.30

root@ubuntu1804:/# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
mysql               5.7.30              9cfcce23593a        17 months ago       448MB

#Run the mysql container, specifying the port and the root password
root@ubuntu1804:/# docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7.30 
aed9e8c7b2811820efc3ae1c4946a847de618d04b2fe978e9d964fa677bc35c9
root@ubuntu1804:/# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                               NAMES
aed9e8c7b281        mysql:5.7.30        "docker-entrypoint.s…"   21 seconds ago      Up 19 seconds       0.0.0.0:3306->3306/tcp, 33060/tcp   blissful_rhodes


root@ubuntu1804:/# docker exec -it aed9e8c7b281 bash
root@aed9e8c7b281:/# cat /etc/issue
Debian GNU/Linux 10 \n \l

root@aed9e8c7b281:/# cat /etc/mysql/my.cnf
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/

root@aed9e8c7b281:/# cat /etc/mysql/mysql.conf.d/mysqld.cnf 
[mysqld]
pid-file    = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
datadir     = /var/lib/mysql
#log-error = /var/log/mysql/error.log
# By default we only accept connections from localhost
#bind-address  = 127.0.0.1
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

root@aed9e8c7b281:/# pstree -p
mysqld(1)-+-{mysqld}(127)
          |-{mysqld}(128)
          |-{mysqld}(129)
          |-{mysqld}(130)
          |-{mysqld}(131)
          |-{mysqld}(132)
          |-{mysqld}(133)
          |-{mysqld}(134)
          |-{mysqld}(135)
          |-{mysqld}(136)
          |-{mysqld}(137)
          |-{mysqld}(138)
          |-{mysqld}(140)
          |-{mysqld}(141)
          |-{mysqld}(142)
          |-{mysqld}(143)
          |-{mysqld}(144)
          |-{mysqld}(145)
          |-{mysqld}(146)
          |-{mysqld}(147)
          |-{mysqld}(148)
          |-{mysqld}(149)
          |-{mysqld}(150)
          |-{mysqld}(151)
          |-{mysqld}(152)
          `-{mysqld}(153)

#Connect to the mysql container with a mysql client and create a database
[root@CT7test1 ~]# yum install -y mysql
[root@CT7test1 ~]# mysql -uroot -p123456 -h10.0.0.110
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.30 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> 
MySQL [(none)]> 
MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

MySQL [(none)]> create database dockerdb;
Query OK, 1 row affected (0.00 sec)

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| dockerdb           |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.01 sec)


#Delete the container
root@ubuntu1804:/# docker rm -f aed9e8c7b281
aed9e8c7b281

#Create a container again
root@ubuntu1804:/# docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7.30 
6aa1d53940b436f1e707408c930e258df8bf063c8f57381b783697b3b55c4be5
root@ubuntu1804:/# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                               NAMES
6aa1d53940b4        mysql:5.7.30        "docker-entrypoint.s…"   50 seconds ago      Up 49 seconds       0.0.0.0:3306->3306/tcp, 33060/tcp   brave_wiles

#After deleting the container and creating a new one, the database is gone
root@ubuntu1804:~# mysql -uroot -p123456 -h10.0.0.110
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.30 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

#Create containers backed by data volumes
root@ubuntu1804:/# docker run --name mysql-test1 -v /data/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -e MYSQL_DATABASE=wordpress -e MYSQL_USER=wpuser -e MYSQL_PASSWORD=123456 -d -p 3306:3306 mysql:5.7.30

root@ubuntu1804:/# docker run --name mysql-test2 -v /root/mysql/:/etc/mysql/conf.d -v /data/mysql2:/var/lib/mysql --env-file=env.list -d -p 3307:3306 mysql:5.7.30

root@ubuntu1804:/# cat mysql/mysql-test.cnf
[mysqld]
server-id=100
log-bin=mysql-bin

root@ubuntu1804:/# cat env.list
MYSQL_ROOT_PASSWORD=123456
MYSQL_DATABASE=wordpress
MYSQL_USER=wpuser
MYSQL_PASSWORD=wppass
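
A quick sketch to verify that the mounted config file and the env file took effect (container name and password reused from above):

docker exec mysql-test2 mysql -uroot -p123456 -e 'show variables like "server_id";'    #expect 100, from mysql-test.cnf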

6.2.4 Case Study: File Data Volume (this example will be revisited later with Tomcat)

File mounts are for files whose content rarely changes, e.g. nginx or tomcat configuration files.

6.2.4.1 Prepare the Relevant Files
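The body of this example is still to come; in the meantime, a minimal sketch of mounting a single file (paths and port are examples):

#Copy the image's default config out, then mount it back read-only
mkdir -p /data/conf
docker run --rm nginx cat /etc/nginx/nginx.conf > /data/conf/nginx.conf
docker run -d --name nginx-conf -v /data/conf/nginx.conf:/etc/nginx/nginx.conf:ro -p 8085:80 nginx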

6.2.5 Case Study: Anonymous Data Volume (nginx)

#First check the local volumes and containers
root@ubuntu1804:~# docker volume ls
DRIVER              VOLUME NAME
root@ubuntu1804:~# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

#Create a container using an anonymous data volume
root@ubuntu1804:~# docker run -d -p 80:80 --name nginx01 -v /usr/share/nginx/html nginx
d387cf9a3de9d7a8cf4471fdae15c3c1ae5aef9fb9353d2d8abc6c1caebf50e0

root@ubuntu1804:~# curl 127.0.0.1:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

#View the automatically generated anonymous volume
root@ubuntu1804:~# docker volume ls
DRIVER              VOLUME NAME
local               72cab41a2a4e205dddc5b3ef1f9472c184fd8a16cf132c7bcfeb1ca3b32c9639

#View the anonymous volume's details
root@ubuntu1804:~# docker volume inspect 72cab41a2a4e205dddc5b3ef1f9472c184fd8a16cf132c7bcfeb1ca3b32c9639
[
    {
        "CreatedAt": "2021-11-07T08:00:56+08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/72cab41a2a4e205dddc5b3ef1f9472c184fd8a16cf132c7bcfeb1ca3b32c9639/_data",
        "Name": "72cab41a2a4e205dddc5b3ef1f9472c184fd8a16cf132c7bcfeb1ca3b32c9639",
        "Options": null,
        "Scope": "local"
    }
]

root@ubuntu1804:~# docker inspect -f "{{.Mounts}}" nginx01
[{volume 72cab41a2a4e205dddc5b3ef1f9472c184fd8a16cf132c7bcfeb1ca3b32c9639 /var/lib/docker/volumes/72cab41a2a4e205dddc5b3ef1f9472c184fd8a16cf132c7bcfeb1ca3b32c9639/_data /usr/share/nginx/html local  true }]

#View the files in the anonymous volume
root@ubuntu1804:~# ls /var/lib/docker/volumes/72cab41a2a4e205dddc5b3ef1f9472c184fd8a16cf132c7bcfeb1ca3b32c9639/_data
50x.html  index.html


#Modify the anonymous volume's file on the host
root@ubuntu1804:~# echo Anonymous Volume > /var/lib/docker/volumes/72cab41a2a4e205dddc5b3ef1f9472c184fd8a16cf132c7bcfeb1ca3b32c9639/_data/index.html 

root@ubuntu1804:~# curl 127.0.0.1
Anonymous Volume


#Deleting the container does not delete the anonymous volume
root@ubuntu1804:~# docker rm -f nginx01 
nginx01
root@ubuntu1804:~# docker volume ls
DRIVER              VOLUME NAME
local               72cab41a2a4e205dddc5b3ef1f9472c184fd8a16cf132c7bcfeb1ca3b32c9639


#Create a container again and check the volume info
root@ubuntu1804:~# docker run -d -p 80:80 --name nginx01 -v /usr/share/nginx/html nginx
0edf8fca1abb6aaade5bdd7bfff5540d0a3aca657c11434de5f907addf5907b0

#Check the host path of the previously created anonymous volume
root@ubuntu1804:~# cat /var/lib/docker/volumes/72cab41a2a4e205dddc5b3ef1f9472c184fd8a16cf132c7bcfeb1ca3b32c9639/_data/index.html 
Anonymous Volume
#The file created earlier still exists

#Access nginx
root@ubuntu1804:~# curl 127.0.0.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
#This is not the earlier file

#View the automatically generated anonymous volumes
root@ubuntu1804:~# docker volume ls
DRIVER              VOLUME NAME
local               72cab41a2a4e205dddc5b3ef1f9472c184fd8a16cf132c7bcfeb1ca3b32c9639
local               904f47bea6ade76943e2dd062430342b3ac71509077b046df0621b9c995ff66d
#A new anonymous volume was generated


#View the new anonymous volume's info
root@ubuntu1804:~# docker inspect -f "{{.Mounts}}" nginx01 
[{volume 904f47bea6ade76943e2dd062430342b3ac71509077b046df0621b9c995ff66d /var/lib/docker/volumes/904f47bea6ade76943e2dd062430342b3ac71509077b046df0621b9c995ff66d/_data /usr/share/nginx/html local  true }]


root@ubuntu1804:~# cat /var/lib/docker/volumes/904f47bea6ade76943e2dd062430342b3ac71509077b046df0621b9c995ff66d/_data/index.html 
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>


#Delete all anonymous volumes
root@ubuntu1804:~# docker volume ls
DRIVER              VOLUME NAME
local               72cab41a2a4e205dddc5b3ef1f9472c184fd8a16cf132c7bcfeb1ca3b32c9639
local               904f47bea6ade76943e2dd062430342b3ac71509077b046df0621b9c995ff66d

root@ubuntu1804:~# docker volume ls -q
72cab41a2a4e205dddc5b3ef1f9472c184fd8a16cf132c7bcfeb1ca3b32c9639
904f47bea6ade76943e2dd062430342b3ac71509077b046df0621b9c995ff66d

root@ubuntu1804:~# docker volume rm `docker volume ls -q`
72cab41a2a4e205dddc5b3ef1f9472c184fd8a16cf132c7bcfeb1ca3b32c9639
Error response from daemon: remove 904f47bea6ade76943e2dd062430342b3ac71509077b046df0621b9c995ff66d: volume is in use - [0edf8fca1abb6aaade5bdd7bfff5540d0a3aca657c11434de5f907addf5907b0]      #a volume used by a running container cannot be removed

root@ubuntu1804:~# docker volume ls
DRIVER              VOLUME NAME
local               904f47bea6ade76943e2dd062430342b3ac71509077b046df0621b9c995ff66d

root@ubuntu1804:~# docker rm -f nginx01 
nginx01

root@ubuntu1804:~# docker volume rm  `docker volume ls -q`
904f47bea6ade76943e2dd062430342b3ac71509077b046df0621b9c995ff66d

root@ubuntu1804:~# docker volume ls
DRIVER              VOLUME NAME



6.2.6 Case Study: Named Data Volume (nginx)

6.2.6.1 Create a Named Data Volume

root@ubuntu1804:~# docker volume create haha
haha

root@ubuntu1804:~# docker volume ls
DRIVER              VOLUME NAME
local               haha

root@ubuntu1804:~# docker inspect haha
[
    {
        "CreatedAt": "2021-11-07T08:34:41+08:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/haha/_data",
        "Name": "haha",
        "Options": {},
        "Scope": "local"
    }
]


6.2.6.2 Create a Container Using the Named Data Volume

root@ubuntu1804:~# docker run -d -p 8080:80 --name nginx01 -v haha:/usr/share/nginx/html nginx
094794098a120a7ce345c319bd0507c93182ba80481060e6d31a265554735b37
root@ubuntu1804:~# curl 127.0.0.1:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>


#Show named volumes
root@ubuntu1804:~# docker volume ls
DRIVER              VOLUME NAME
local               haha


#View the named volume's details
root@ubuntu1804:~# docker volume inspect haha 
[
    {
        "CreatedAt": "2021-11-07T08:36:31+08:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/haha/_data",
        "Name": "haha",
        "Options": {},
        "Scope": "local"
    }
]

root@ubuntu1804:~# docker inspect -f "{{.Mounts}}" nginx01 
[{volume haha /var/lib/docker/volumes/haha/_data /usr/share/nginx/html local z true }]


#View the named volume's files
root@ubuntu1804:~# ls /var/lib/docker/volumes/haha/_data/
50x.html  index.html


#Modify the named volume's file on the host
root@ubuntu1804:~# echo nginx haha website > /var/lib/docker/volumes/haha/_data/index.html 
root@ubuntu1804:~# curl 127.0.0.1:8080
nginx haha website


#Create another container from the same named volume; it shares the volume's data with the existing container
root@ubuntu1804:~# docker run -d -p 8081:80 --name nginx02 -v haha:/usr/share/nginx/html nginx
89de0e1722b65b1480c04559203a0603d46deccab24da653834a6fe4f41718d1
root@ubuntu1804:~# curl 127.0.0.1:8081
nginx haha website


6.2.6.3 Create the Named Volume Automatically When Creating a Container

#Creating the container automatically creates the named volume
root@ubuntu1804:~# docker run -d -p 8082:80 --name nginx03 -v lala:/usr/share/nginx/html nginx
9d0526acd9d302b8609922c57df8e3b48178e27672e402101a674d987157bf2b
root@ubuntu1804:~# docker volume ls
DRIVER              VOLUME NAME
local               haha
local               lala

root@ubuntu1804:~# docker volume inspect lala
[
    {
        "CreatedAt": "2021-11-07T08:45:26+08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/lala/_data",
        "Name": "lala",
        "Options": null,
        "Scope": "local"
    }
]


6.2.6.4 Delete Data Volumes

#Delete a specific named volume
root@ubuntu1804:~# docker volume rm haha

#Clean up all volumes that are no longer in use
root@ubuntu1804:~# docker volume prune -f

6.3 ★★Data Volume Containers★★

6.3.1 Introduction to Data Volume Containers

Volumes created in a Dockerfile are anonymous, so by themselves they cannot share data among multiple containers

The biggest feature of a data volume container is letting data be shared among multiple docker containers

As the figure below shows: container B can access container A's content, and container C can as well, so containers A, B and C can all share reads and writes of the same data.

The idea is to first create a background container that acts as a Server providing the volume; that volume then offers storage to other containers, which act as the clients. This method is not commonly used, though

Drawback: everything depends on one Server container, so if that Server container fails, all the Client containers are affected

6.3.2 Using a Data Volume Container

When starting a container, specify the data volume container to use

The following docker run option implements data volume containers; format:

--volumes-from <data volume container>   Mount volumes from the specified container(s)

6.3.3 Case Study: Data Volume Container

6.3.3.1 Create a Data Volume Container Server

Merely creating the container without starting it would also work; it mounts the host's data directory

root@ubuntu1804:~# docker run -d --name volume-server -v haha:/usr/share/nginx/html -p 777:80 nginx
30d4fbd987a21da52ec074ee1c3691496235a14de78e6817bdc3d9a4471b8341

root@ubuntu1804:~# curl 127.0.0.1:777
nginx haha website

root@ubuntu1804:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
30d4fbd987a2        nginx               "/docker-entrypoint.…"   28 seconds ago      Up 26 seconds       0.0.0.0:777->80/tcp    volume-server


6.3.3.2 Start Several Data Volume Container Clients

root@ubuntu1804:~# docker run -d --name client1 --volumes-from volume-server -p 8080:80 nginx
b6b7db3283a380dc45d99e0524ef742e6ff74eab10cdb9e893da852f7dfcd53c

root@ubuntu1804:~# docker run -d --name client2 --volumes-from volume-server -p 8081:80 nginx
972bea7ded8df22f8f0b57f45b979203fd798b85cdf4183bff192d917be12a0e

root@ubuntu1804:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
972bea7ded8d        nginx               "/docker-entrypoint.…"   3 seconds ago       Up 2 seconds        0.0.0.0:8081->80/tcp   client2
b6b7db3283a3        nginx               "/docker-entrypoint.…"   15 seconds ago      Up 14 seconds       0.0.0.0:8080->80/tcp   client1
30d4fbd987a2        nginx               "/docker-entrypoint.…"   5 minutes ago       Up 5 minutes        0.0.0.0:777->80/tcp    volume-server


6.3.3.3 Verify Access

root@ubuntu1804:~# curl 127.0.0.1:8080
nginx haha website
root@ubuntu1804:~# curl 127.0.0.1:8081
nginx haha website

6.3.3.4 Enter the Containers and Test Reads and Writes

Read/write permissions follow those of the source volume in the Server container

#Enter the Server container and modify the data
root@ubuntu1804:~# docker exec -it volume-server bash
root@30d4fbd987a2:/# cat /usr/share/nginx/html/index.html 
nginx haha website

root@30d4fbd987a2:/# echo test v1 > /usr/share/nginx/html/index.html 

#Verify access
root@ubuntu1804:~# cat /var/lib/docker/volumes/haha/_data/index.html 
test v1
root@ubuntu1804:~# curl 127.0.0.1:777
test v1
root@ubuntu1804:~# curl 127.0.0.1:8080
test v1
root@ubuntu1804:~# curl 127.0.0.1:8081
test v1


#Enter a Client container and modify the data
root@ubuntu1804:~# docker exec -it client1 bash
root@b6b7db3283a3:/# echo test v2 > /usr/share/nginx/html/index.html 

root@ubuntu1804:~# curl 127.0.0.1:777
test v2
root@ubuntu1804:~# curl 127.0.0.1:8081
test v2
root@ubuntu1804:~# curl 127.0.0.1:8080
test v2


#Modify directly on the host
root@ubuntu1804:~# echo test v3 > /var/lib/docker/volumes/haha/_data/index.html 

root@ubuntu1804:~# curl 127.0.0.1:777
test v3
root@ubuntu1804:~# curl 127.0.0.1:8081
test v3
root@ubuntu1804:~# curl 127.0.0.1:8080
test v3


6.3.3.5 Stop the Volume Container Server and Test Creating New Containers

With the volume container Server stopped, new client containers can still be created and old client containers can still be accessed

root@ubuntu1804:~# docker stop volume-server 

root@ubuntu1804:~# docker run -d --name client3 --volumes-from volume-server -p 8083:80 nginx
2714751f067211b53520c36554a07a0fadb7d0d236bbdc9e485fc515e4ace7cd

root@ubuntu1804:~# curl 127.0.0.1:8083
test v3

root@ubuntu1804:~# curl 127.0.0.1:8081
test v3


6.3.3.6 Delete the Source Volume Container Server, Then Access Old Clients and Create a New One

After the data volume container is deleted, the old client containers still serve requests, but new client containers can no longer be created

root@ubuntu1804:~# docker rm -fv volume-server 
volume-server

root@ubuntu1804:~# docker volume ls
DRIVER              VOLUME NAME
local               haha
local               lala

root@ubuntu1804:~# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
2714751f0672        nginx               "/docker-entrypoint.…"   3 minutes ago       Up 3 minutes        0.0.0.0:8083->80/tcp   client3
972bea7ded8d        nginx               "/docker-entrypoint.…"   22 minutes ago      Up 21 minutes       0.0.0.0:8081->80/tcp   client2
b6b7db3283a3        nginx               "/docker-entrypoint.…"   22 minutes ago      Up 22 minutes       0.0.0.0:8080->80/tcp   client1

root@ubuntu1804:~# docker run -d --name client4 --volumes-from volume-server -p 8084:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
Digest: sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
Status: Image is up to date for nginx:latest
docker: Error response from daemon: No such container: volume-server.
See 'docker run --help'.

root@ubuntu1804:~# curl 127.0.0.1:8081
test v3
root@ubuntu1804:~# curl 127.0.0.1:8083
test v3
root@ubuntu1804:~# curl 127.0.0.1:8084
curl: (7) Failed to connect to 127.0.0.1 port 8084: Connection refused


6.3.3.7 Recreate the Volume Container Server

Once the volume container is recreated, new client containers can be created again

root@ubuntu1804:~# docker run -d --name volume-server -v haha:/usr/share/nginx/html:ro -p 777:80 nginx
ef448d452aca80d70388997df56a27875deb79fc1183778c54d730424fdb2ff8

root@ubuntu1804:~# docker run -d --name client4 --volumes-from volume-server -p 8084:80 nginx
be8e16e050dbd8ad1674f5a6d543c2e705270bf0d64e7267e6cf603c6fb6c36a

root@ubuntu1804:~# curl 127.0.0.1:8084
test v3


6.3.4 Backing Up a Container's Data Volume via a Data Volume Container

Because the host location of an anonymous data volume is not fixed, a data volume container can be used to back up anonymous volumes conveniently

#Format for performing the backup from a helper container
docker run -it --rm --volumes-from [container name] -v $(pwd):/backup ubuntu
root@ca5bb2c1f877:/#tar cvf /backup/backup.tar [container data volume]

#Notes
[container name] #the container whose volume is to be backed up
[container data volume] #the directory inside the container that corresponds to the data volume

#Restore format
docker run -it --rm --volumes-from [container name] -v $(pwd):/backup ubuntu
root@ca5bb2c1f877:/#tar xvf /backup/backup.tar -C [container data volume]

#Create the data volume container to back up (container name date-docker, with an anonymous volume at /datavolumel)
root@ubuntu1804:~# docker run -it -v /datavolumel --name date-docker centos-base:2.0 
[root@40b19dbf0953 /]# ls
anaconda-post.log  datavolumel  etc   lib    media  opt   root  sbin  sys  usr
bin                dev          home  lib64  mnt    proc  run   srv   tmp  var
[root@40b19dbf0953 /]# touch /datavolumel/centos.txt
[root@40b19dbf0953 /]# exit
exit


#Previously, to locate an anonymous volume's host directory, you could look it up like this
root@ubuntu1804:~# docker inspect -f "{{.Mounts}}" date-docker 
[{volume a214319533bff3ff5a0738dd355434c63fc4a0065892b1cf14ed4ec3dc47d9d4 /var/lib/docker/volumes/a214319533bff3ff5a0738dd355434c63fc4a0065892b1cf14ed4ec3dc47d9d4/_data /datavolumel local  true }]

root@ubuntu1804:~# docker volume ls
DRIVER              VOLUME NAME
local               a214319533bff3ff5a0738dd355434c63fc4a0065892b1cf14ed4ec3dc47d9d4

root@ubuntu1804:~# ll /var/lib/docker/volumes/a214319533bff3ff5a0738dd355434c63fc4a0065892b1cf14ed4ec3dc47d9d4/_data
total 8
drwxr-xr-x 2 root root 4096 Nov  7 10:47 ./
drwxr-xr-x 3 root root 4096 Nov  7 10:46 ../
-rw-r--r-- 1 root root    0 Nov  7 10:47 centos.txt

#Now create the backup helper container directly from the data volume container [the container being backed up serves as the data volume container for a new container, which at the same time bind-mounts the host's /root onto the container's /backup (the exact directories here don't matter)]
root@ubuntu1804:~# docker run -it --volumes-from date-docker -v /root/:/backup/  --name  backup-server ubuntu
root@bb2cf78710d6:/# ls
backup  boot         dev  home  lib32  libx32  mnt  proc  run   srv  tmp  var
bin     datavolumel  etc  lib   lib64  media   opt  root  sbin  sys  usr
root@bb2cf78710d6:/# ls /backup/        #shows the files under the host's /root
all.tar  all1.tar  inspect  install_docker_ubuntu.sh  sx

#View the files in the data container
root@bb2cf78710d6:/# ll /datavolumel/   
total 8
drwxr-xr-x 2 root root 4096 Nov  7 02:47 ./
drwxr-xr-x 1 root root 4096 Nov  7 02:50 ../
-rw-r--r-- 1 root root    0 Nov  7 02:47 centos.txt
root@bb2cf78710d6:/# ls /datavolumel/
centos.txt

#Tar the data container's files into the directory shared with the host
root@bb2cf78710d6:/# cd /datavolumel/
root@bb2cf78710d6:/datavolumel# tar cvf /backup/data.tar .
./
./centos.txt
root@bb2cf78710d6:/datavolumel# ll /backup/data.tar 
-rw-r--r-- 1 root root 10240 Nov  7 03:08 /backup/data.tar


#Check locally on the host
root@ubuntu1804:~# ll /root/data.tar 
-rw-r--r-- 1 root root 10240 Nov  7 11:08 /root/data.tar


#Delete the container's data
root@ubuntu1804:~# docker start  -i date-docker 
[root@40b19dbf0953 /]# ll /datavolumel/
total 0
-rw-r--r-- 1 root root 0 Nov  7 10:47 centos.txt
[root@40b19dbf0953 /]# rm -rf /datavolumel/*
[root@40b19dbf0953 /]# ll /datavolumel/
total 0

#Restore
root@ubuntu1804:~# docker run -it --volumes-from date-docker -v /root/:/backup/  --name  backup-server1 ubuntu
root@88d9591bb553:/# ll /backup/data.tar 
-rw-r--r-- 1 root root 10240 Nov  7 03:08 /backup/data.tar
root@88d9591bb553:/# tar xvf /backup/data.tar -C /datavolumel/
./
./centos.txt
root@88d9591bb553:/# 

#Above, the extraction was done from inside the container; the tar can also be run directly in docker run
root@ubuntu1804:~# docker run -it --volumes-from date-docker -v /root/:/backup/  --name  backup-server2 ubuntu tar xvf /backup/data.tar -C /datavolumel/
./
./centos.txt


#Verify the restore
root@ubuntu1804:~# docker start  -i date-docker 
[root@40b19dbf0953 /]# ll /datavolumel/
total 0
-rw-r--r-- 1 root root 0 Nov  7 10:47 centos.txt

6.3.5 Data Volume Container Summary

If the Server container that provides the volume is deleted, the already-running Client containers can still use the mounted volume, because containers access the data through the mount; new volume clients, however, cannot be created until the volume Server container is created again, after which clients can be created normally. This approach can be used for environments such as shared data directories in production, because even if the data volume container is deleted, the other running containers can still mount and use the volume

It follows that a data volume container merely passes its mount information on to the other containers that use it; the data volume container itself provides no data storage of its own

A data volume container can provide file sharing to other containers much like an NFS share: in production you can start one instance that mounts a local directory, and let the other containers each mount that container's directory, which keeps the data consistent across containers. The Server and the Clients of a data volume container do not have to be created from the same image

7. Network Management

Once a docker container is created, it inevitably needs to communicate over the network with other hosts or containers

Official documentation:

https://docs.docker.com/network/

7.1 Docker's Default Network Communication

7.1.1 Default Network Settings After Installing Docker

After the Docker service is installed, every host gets an interface named docker0 by default, with the IP address 172.17.0.1/16

root@ubuntu1804:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:42:f2:be brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.110/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe42:f2be/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:5d:53:77:f6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:5dff:fe53:77f6/64 scope link 

#View the bridge state (the bridge-utils package must be installed)
root@ubuntu1804:/# apt -y install bridge-utils
root@ubuntu1804:/# brctl show
bridge name bridge id       STP enabled interfaces
docker0     8000.02425d5377f6   no      


7.1.2 Network Configuration After Creating a Container

Each time a new container is created:

  • The host gains a virtual interface that forms a pair with the container's interface, e.g. 137: veth8ca6d43@if136, while the interface inside the container is number 136, so the pairing between the container and host interfaces is visible from the numbers
  • The container automatically gets a random address in the 172.17.0.0/16 network, starting from 172.17.0.2; the second container gets 172.17.0.3, and so on
  • The address a container gets is not fixed; it may change each time the container restarts

7.1.2.1 Network State After Creating the First Container

root@ubuntu1804:/# docker run -d alpine:3.11 tail -f /etc/hosts
79c125b8792376b6b16a8c70962ddf80fdf32624ed4d01b0ab406f0bae48d0b6

root@ubuntu1804:/# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
79c125b87923        alpine:3.11         "tail -f /etc/hosts"   6 seconds ago       Up 4 seconds                            busy_lederberg

#View the interfaces on the host
root@ubuntu1804:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:42:f2:be brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.110/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe42:f2be/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:5d:53:77:f6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:5dff:fe53:77f6/64 scope link 
       valid_lft forever preferred_lft forever
7: veth24c8045@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 06:53:0b:99:64:6d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::453:bff:fe99:646d/64 scope link 
       valid_lft forever preferred_lft forever
#A new interface, number 7, has appeared

#Enter the container and view its interfaces
root@ubuntu1804:/# docker exec -it 79c125b87923 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
#Interface number 6, IP address 172.17.0.2/16


#View the bridge state after creating the container
root@ubuntu1804:/# brctl show
bridge name bridge id       STP enabled interfaces
docker0     8000.02425d5377f6   no      veth24c8045



Note: the ip a command shows how the container's eth0@if15 pairs with the host interface; the ifconfig command does not reveal this
root@ubuntu1804:/# docker run -it alpine:3.11 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:c0:a8:00:03 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.3/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever

/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:00:03  
          inet addr:192.168.0.3  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:656 (656.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # 
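
A related sketch: a container interface's host-side peer can also be located through its ifindex (the interface numbers below follow the example above):

#Inside the container: print the ifindex of eth0's peer on the host
cat /sys/class/net/eth0/iflink       #e.g. 15

#On the host: find the veth interface with that index
ip -o link | grep '^15:'             #-> 15: vethc21fd9d@if14 ...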


7.1.2.2 Network State After Creating the Second Container

root@ubuntu1804:/# docker run -d -it alpine:3.11 tail -f /etc/hosts 
9464bf42ba1062b07a97f645446197d46527be0ff55da21130e317f6441423e4
root@ubuntu1804:/# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
9464bf42ba10        alpine:3.11         "tail -f /etc/hosts"   10 seconds ago      Up 8 seconds                            naughty_borg
79c125b87923        alpine:3.11         "tail -f /etc/hosts"   14 minutes ago      Up 14 minutes                           busy_lederberg

##View the host interfaces: another interface, number 15, was added
root@ubuntu1804:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:42:f2:be brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.110/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe42:f2be/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:5d:53:77:f6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:5dff:fe53:77f6/64 scope link 
       valid_lft forever preferred_lft forever
7: veth24c8045@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 06:53:0b:99:64:6d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::453:bff:fe99:646d/64 scope link 
       valid_lft forever preferred_lft forever
15: vethc21fd9d@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether ae:1a:0f:29:38:8e brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ac1a:fff:fe29:388e/64 scope link 
       valid_lft forever preferred_lft forever


#Enter the container and check its interfaces: interface 14, IP 172.17.0.3
root@ubuntu1804:/# docker exec -it 9464bf42ba sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

#View the bridge state after creating the container
root@ubuntu1804:/# brctl show
bridge name bridge id       STP enabled interfaces
docker0     8000.02425d5377f6   no      veth24c8045
                            vethc21fd9d


7.1.3 Communication Between Containers

7.1.3.1 Different Containers on the Same Host Can Communicate

By default:

  • Different containers on the same host can communicate with each other
    dockerd  --icc  Enable inter-container communication (default true)
    --icc=false  #this setting forbids communication between containers on the same host

  • Containers on different hosts have overlapping IP addresses and, by default, cannot communicate with each other
#Containers on the same host reaching each other (the second container pings the first container and the host)
root@ubuntu1804:/# docker exec -it 9464bf42ba sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.107 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.078 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.083 ms
^C
--- 172.17.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.078/0.089/0.107 ms

/ # ping 172.17.0.1
PING 172.17.0.1 (172.17.0.1): 56 data bytes
64 bytes from 172.17.0.1: seq=0 ttl=64 time=0.143 ms
64 bytes from 172.17.0.1: seq=1 ttl=64 time=0.077 ms
^C
--- 172.17.0.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.077/0.110/0.143 ms

#By default all the pings succeed


#Edit the docker.service unit file to change the default inter-container communication setting
root@ubuntu1804:/# vim /lib/systemd/system/docker.service 
root@ubuntu1804:/# cat /lib/systemd/system/docker.service | grep ExecStart
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --icc=false

#Reload the unit file and restart the docker service
root@ubuntu1804:/# systemctl daemon-reload 
root@ubuntu1804:/# systemctl restart docker

#Restart the containers
root@ubuntu1804:/# docker start 9464bf42ba10 
9464bf42ba10
root@ubuntu1804:/# docker start 79c125b87923 
79c125b87923

#Enter the containers and ping: they can no longer communicate
root@ubuntu1804:/# docker exec -it 9464bf42ba sh
/ # hostname -i
172.17.0.2
/ # ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes

root@ubuntu1804:/# docker exec -it 79c125b87923 sh
/ # hostname -i
172.17.0.3
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes

/ # ping 172.17.0.1
PING 172.17.0.1 (172.17.0.1): 56 data bytes
64 bytes from 172.17.0.1: seq=0 ttl=64 time=0.071 ms
64 bytes from 172.17.0.1: seq=1 ttl=64 time=0.078 ms
64 bytes from 172.17.0.1: seq=2 ttl=64 time=0.079 ms
64 bytes from 172.17.0.1: seq=3 ttl=64 time=0.076 ms
64 bytes from 172.17.0.1: seq=4 ttl=64 time=0.079 ms
64 bytes from 172.17.0.1: seq=5 ttl=64 time=0.082 ms
64 bytes from 172.17.0.1: seq=6 ttl=64 time=0.077 ms

#With inter-container communication disabled, containers on the same host cannot reach each other, but communication between containers and the host is unaffected

7.1.4 Modifying the Default Network Settings

New containers use docker0's network configuration by default; the default can be redirected to a custom bridge network

7.1.4.1 View the Default Network

root@ubuntu1804:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:42:f2:be brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.110/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe42:f2be/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:5d:53:77:f6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:5dff:fe53:77f6/64 scope link 
       valid_lft forever preferred_lft forever



7.1.4.2 Set Up a New Bridge Interface

#Add a bridge interface (the bridge-utils package must be installed)
root@ubuntu1804:/# apt -y install bridge-utils
root@ubuntu1804:/# brctl addbr br0                  #add bridge br0
root@ubuntu1804:/# ip a a 192.168.0.1/24 dev br0    #assign an IP to br0 (takes effect now, lost after a reboot)
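
Because the address assigned above does not survive a reboot, one way to persist it (a sketch, assuming Ubuntu 18.04 with netplan) is:

cat > /etc/netplan/60-br0.yaml <<'EOF'
network:
  version: 2
  bridges:
    br0:
      addresses: [192.168.0.1/24]
EOF
netplan apply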

#View the bridge state
root@ubuntu1804:/# brctl show
bridge name bridge id       STP enabled interfaces
br0     8000.000000000000   no      
docker0     8000.02425d5377f6   no          

#View the host's interfaces
root@ubuntu1804:/# ip a 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:42:f2:be brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.110/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe42:f2be/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:5d:53:77:f6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:5dff:fe53:77f6/64 scope link 
       valid_lft forever preferred_lft forever
20: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ee:5e:ec:bb:26:92 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 scope global br0
       valid_lft forever preferred_lft forever


7.1.4.3 Make br0 the Default Interface for Containers

#Look at the interface-related options
root@ubuntu1804:/# dockerd --help

Usage:  dockerd [OPTIONS]

A self-sufficient runtime for containers.

Options:
      --bip string                              Specify network bridge IP               #specify the bridge's IP
  -b, --bridge string                           Attach containers to a network bridge   #attach containers to a given bridge


#Edit the docker service unit file to attach containers to br0
root@ubuntu1804:/# vim /lib/systemd/system/docker.service 
root@ubuntu1804:/# cat /lib/systemd/system/docker.service | grep -i 'Execstart'
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -b br0


#Reload the unit file and restart the service
root@ubuntu1804:/# systemctl daemon-reload 
root@ubuntu1804:/# systemctl restart docker.service 

#Check that the modified service process took effect
root@ubuntu1804:/# ps -aux | grep docker
root       4781  0.5  8.6 831096 84796 ?        Ssl  16:43   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -b br0
root       4951  0.0  0.1  14428  1036 pts/1    S+   16:44   0:00 grep --color=auto docker


7.1.4.4 Create a Container and Check Its Address

root@ubuntu1804:/# docker run -it -d nginx-1-16:2.0 
8d8c40276be6f5bab1d2e47540f6fadd731fb5c9f0b8d2d30fd3cdbe4209f3a3
root@ubuntu1804:/# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
8d8c40276be6        nginx-1-16:2.0      "/app/nginx/sbin/ngi…"   6 seconds ago       Up 4 seconds        80/tcp, 443/tcp     practical_pasteur

root@ubuntu1804:/# docker inspect -f "{{.NetworkSettings.IPAddress}}" 8d8c40276be6
192.168.0.2

root@ubuntu1804:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:42:f2:be brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.110/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe42:f2be/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:5d:53:77:f6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:5dff:fe53:77f6/64 scope link 
       valid_lft forever preferred_lft forever
20: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 42:c7:12:7b:d6:dd brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::ec5e:ecff:febb:2692/64 scope link 
       valid_lft forever preferred_lft forever
22: veth6c59e14@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default 
    link/ether 42:c7:12:7b:d6:dd brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::40c7:12ff:fe7b:d6dd/64 scope link 
       valid_lft forever preferred_lft forever

root@ubuntu1804:/# brctl show
bridge name bridge id       STP enabled interfaces
br0     8000.42c7127bd6dd   no      veth6c59e14
docker0     8000.02425d5377f6   no      

7.2 Container interconnection by name

When a new container is created, docker automatically assigns its name, ID and IP address, so none of the three is fixed. How, then, do we identify a particular container and communicate with it reliably? The solution is to give the container a fixed name and let containers reach each other through that fixed name.

There are two kinds of fixed names:

  • the container name
  • an alias of the container name

Note: both methods require at least two containers.

7.2.1 Interconnecting through container names

7.2.1.1 Introduction

Containers on the same host can reach each other through custom container names. For example: a service may serve static pages with nginx and dynamic pages with tomcat, plus a load balancer such as haproxy dispatching requests to the nginx and tomcat containers. Container IPs are assigned dynamically at startup, while a custom container name is relatively fixed, so names fit this scenario well.

Note: if the referenced container's address changes, the current container must be restarted for the change to take effect.

7.2.1.2 Implementation

When creating a container with docker run, the --link option lets it reference another container by name.

--link list           # Add link to another container
Format:
docker run --name <container name>                       # first create a container with a fixed name
docker run --link <ID or name of the target container>   # then create a container that references it by name

7.2.1.3 Case study 1: inter-container communication via container names

7.2.1.3.1 Create the first container with a fixed name
root@ubuntu1804:/# docker run -it --name server1 --rm alpine:3.11 sh
/ # cat /etc/hosts
127.0.0.1   localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.0.3 eae3a5d127fd
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:c0:a8:00:03 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.3/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 192.168.0.3        # ping its own IP
PING 192.168.0.3 (192.168.0.3): 56 data bytes
64 bytes from 192.168.0.3: seq=0 ttl=64 time=0.034 ms
64 bytes from 192.168.0.3: seq=1 ttl=64 time=0.056 ms
^C
--- 192.168.0.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.034/0.045/0.056 ms
/ # ping server1            # ping its own container name
ping: bad address 'server1'
/ # ping eae3a5d127fd       # ping its own container ID
PING eae3a5d127fd (192.168.0.3): 56 data bytes
64 bytes from 192.168.0.3: seq=0 ttl=64 time=0.030 ms
64 bytes from 192.168.0.3: seq=1 ttl=64 time=0.058 ms
^C
--- eae3a5d127fd ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.030/0.044/0.058 ms



7.2.1.3.2 Create a second container that references the first one's name

The first container's name is automatically added to the second one's /etc/hosts, so the first container can be reached by name

root@ubuntu1804:/# docker run -it --rm --name server2 --link server1 alpine:3.11 
/ # env
HOSTNAME=1f84fcc2e5aa
SHLVL=1
HOME=/root
SERVER1_NAME=/server2/server1
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # 
/ # cat /etc/hosts 
127.0.0.1   localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.0.3 server1 eae3a5d127fd
192.168.0.4 1f84fcc2e5aa
/ # ping server1
PING server1 (192.168.0.3): 56 data bytes
64 bytes from 192.168.0.3: seq=0 ttl=64 time=0.139 ms
64 bytes from 192.168.0.3: seq=1 ttl=64 time=0.083 ms
^C
--- server1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.083/0.111/0.139 ms

/ # ping server2
ping: bad address 'server2'


root@ubuntu1804:/# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
1f84fcc2e5aa        alpine:3.11         "/bin/sh"                3 minutes ago       Up 3 minutes                            server2
eae3a5d127fd        alpine:3.11         "sh"                     8 minutes ago       Up 8 minutes                            server1


7.2.2 Interconnecting through custom container aliases

7.2.2.1 Introduction

A custom container name may change later, and every program that calls a service by that fixed name would then have to change with it. For example, a program calling a service through a fixed container name can no longer reach it once the name changes, and updating every caller each time is tedious. Custom aliases solve this: the container name can change freely as long as the alias stays the same.

7.2.2.2 Implementation

Command format:

docker run --name <container name>                                             # first create a container with a fixed name
docker run -d --name <container name> --link <target container name>:<alias>   # then create a new container that references it under an alias

7.2.2.3 Case study: using container aliases

# Create a third container that references the earlier container under an alias
root@ubuntu1804:~# docker run -it -d --name server1 alpine:3.11 tail -f /etc/hosts
599f457f207d7d1f87c0a5a0f4ecf12245e12918f4b9a9ec599bb3d4e7c62c53

root@ubuntu1804:~# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
599f457f207d        alpine:3.11         "tail -f /etc/hosts"   4 seconds ago       Up 3 seconds                            server1

root@ubuntu1804:~# docker run -it --name server3 --link server1:server1-alias1 alpine:3.11 sh
/ # env
HOSTNAME=1a30d1445ac7
SHLVL=1
HOME=/root
SERVER1_ALIAS1_NAME=/server3/server1-alias1
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # cat /etc/hosts
127.0.0.1   localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2  server1-alias1 599f457f207d server1
172.17.0.3  1a30d1445ac7
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.087 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.086 ms
^C
--- 172.17.0.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.086/0.086/0.087 ms
/ # ping server1
PING server1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.086 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.086 ms
64 bytes from 172.17.0.2: seq=3 ttl=64 time=0.083 ms
^C
--- server1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.080/0.083/0.086 ms
/ # ping server1-alias1
PING server1-alias1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.054 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.082 ms
^C
--- server1-alias1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.054/0.068/0.082 ms
/ # 


# Create a fourth container that references the earlier container under several aliases
root@ubuntu1804:~# docker run -it --name server4 --link server1:"server1-alias1 server1-alias2" alpine:3.11 sh
/ # cat /etc/hosts 
127.0.0.1   localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2  server1-alias1 server1-alias2 599f457f207d server1
172.17.0.3  a7d3ba5711fd

/ # ping server1-alias1
PING server1-alias1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.080 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.087 ms
^C
--- server1-alias1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.080/0.083/0.087 ms
/ # ping server1-alias2
PING server1-alias2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.067 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.081 ms
^C
--- server1-alias2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.067/0.074/0.081 ms
/ # 


Note:
Multiple aliases must be enclosed in double quotes; with the quotes in place the image name can no longer be tab-completed, which is a shell inconvenience, not a syntax error. Without the quotes, docker parses the second alias as the image name:
root@ubuntu1804:~# docker run -it --name server5 --link server1:server1-alias1 server1-alias2 alpine:3.11 sh
Unable to find image 'server1-alias2:latest' locally


7.3 ★★Docker network modes★★

7.3.1 Overview

Docker supports five network modes:

  • none
  • bridge
  • host
  • container
  • network-name
# List the default networks
root@ubuntu1804:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c0286106d022        bridge              bridge              local
f0e32533201f        host                host                local
50ba4fee40db        none                null                local


7.3.2 Specifying the network mode

New containers use bridge mode by default. When creating a container, docker run selects the network mode with the following options

Format

docker run --network <mode>
docker run --net=<mode>

<mode> can be one of:
none
bridge
host
container:<container name or ID>
<custom network name>
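
For instance, all five values can be exercised like this (a minimal sketch; the app* names and the alpine image are illustrative, not part of the original lab). A custom network name goes in the same position; see 7.3.7:

docker run -d --name app1 --network bridge alpine:3.11 sleep 3600           # default bridge mode
docker run -d --name app2 --network host   alpine:3.11 sleep 3600           # share the host's network
docker run -d --name app3 --network none   alpine:3.11 sleep 3600           # no networking at all
docker run -d --name app4 --network container:app1 alpine:3.11 sleep 3600   # share app1's network namespace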

7.3.3 The bridge network mode

7.3.3.1 Architecture

Bridge is docker's default mode: a container created without specifying any mode uses it, and it is the most commonly used. Each container gets its own network namespace, IP and related settings, and is attached to a virtual bridge to communicate with the outside world

Containers can reach external networks via SNAT, and DNAT can expose a container to external hosts, which is why this mode is also called NAT mode

Note: the host must have ip_forward enabled (installing docker normally turns it on automatically)

Characteristics of the bridge mode

  • Network isolation: containers on different hosts cannot talk to each other directly; each host uses its own independent network
  • No manual configuration: containers automatically get an IP from 172.17.0.0/16 by default; this range can be changed
  • Outbound access: containers reach external networks via SNAT through the host's physical NIC
  • No direct inbound access: external hosts cannot reach containers directly, but DNAT can be configured to accept external traffic (see the sketch after this list)
  • Lower performance: NAT translation adds overhead
  • Tedious port management: every published container port must be unique on the host, which easily causes port conflicts
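
As a concrete illustration of the DNAT point above, publishing a port with -p is what installs the DNAT rule. A minimal sketch reusing this document's nginx-1-16:2.0 image (the name web1 and host port 8080 are arbitrary):

docker run -d --name web1 -p 8080:80 nginx-1-16:2.0   # host port 8080 -> container port 80

docker port web1              # prints: 80/tcp -> 0.0.0.0:8080
iptables -t nat -vnL DOCKER   # shows the DNAT rule docker added for port 8080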

7.3.3.2 Default settings of the bridge mode

# Inspect the bridge network; the output shows which containers (here server1) use bridge mode
root@ubuntu1804:~# docker network inspect bridge 
[
    {
        "Name": "bridge",
        "Id": "c0286106d0224ca8040d8ddabeae2117b9010f011da60dbb67c4d7e599463694",
        "Created": "2021-11-08T07:45:11.056778301+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "599f457f207d7d1f87c0a5a0f4ecf12245e12918f4b9a9ec599bb3d4e7c62c53": {
                "Name": "server1",
                "EndpointID": "f0b877a38d16cba9149bfac675d72e9f4cd34c448b7815f4b3bab49e7ebebf03",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]


# Network state on the host
# ip_forward is enabled by default after installing docker
root@ubuntu1804:~# cat /proc/sys/net/ipv4/ip_forward
1

root@ubuntu1804:~# iptables -vnL -t nat
Chain PREROUTING (policy ACCEPT 10 packets, 1072 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    1    76 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 3 packets, 534 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 17 packets, 1345 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 22 packets, 1765 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    2   118 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0    

# Containers reach external networks via SNAT through the host's physical NIC
root@ubuntu1804:~# docker exec -it 599f457f207d sh
/ # ping 10.0.0.10
PING 10.0.0.10 (10.0.0.10): 56 data bytes
64 bytes from 10.0.0.10: seq=0 ttl=63 time=0.958 ms
64 bytes from 10.0.0.10: seq=1 ttl=63 time=0.699 ms
64 bytes from 10.0.0.10: seq=2 ttl=63 time=0.319 ms
64 bytes from 10.0.0.10: seq=3 ttl=63 time=0.489 ms
^C
--- 10.0.0.10 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.319/0.616/0.958 ms

/ # ping www.baidu.com
PING www.baidu.com (14.215.177.39): 56 data bytes
64 bytes from 14.215.177.39: seq=0 ttl=127 time=37.936 ms
64 bytes from 14.215.177.39: seq=1 ttl=127 time=37.217 ms
^C
--- www.baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 37.217/37.576/37.936 ms

/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0

7.3.3.3 Changing the default bridge network configuration

# Check the current subnet
root@ubuntu1804:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:42:f2:be brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.110/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe42:f2be/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:24:f1:97:87 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:24ff:fef1:9787/64 scope link 
       valid_lft forever preferred_lft forever


# Method 1 for changing the default bridge subnet: section 7.1.4.3 attached containers to a brand-new bridge (br0); here we instead change the IP of the default docker0 bridge with --bip
root@ubuntu1804:/# vim /lib/systemd/system/docker.service 
root@ubuntu1804:/# cat /lib/systemd/system/docker.service | grep -i execstart
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --bip=192.168.0.1/24 


root@ubuntu1804:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:42:f2:be brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.110/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe42:f2be/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:24:f1:97:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 brd 192.168.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:24ff:fef1:9787/64 scope link 
       valid_lft forever preferred_lft forever

# Check the bridge network's address information
root@ubuntu1804:/# docker network inspect -f "{{.IPAM.Config}}" bridge 
[{192.168.0.1/24  192.168.0.1 map[]}]


# Method 2: change the bridge network via daemon.json
[root@ubuntu1804 ~]#vim /etc/docker/daemon.json
{
 "hosts": ["tcp://0.0.0.0:2375", "fd://"],
 "bip": "192.168.100.100/24",     #分配docker0网卡的IP,24是容器IP的netmask
 "fixed-cidr": "192.168.100.128/26", #分配容器IP范围,26不是容器IP的子网掩码,只表示地址范围
 "fixed-cidr-v6": "2001:db8::/64",
 "mtu": 1500,
 "default-gateway": "192.168.100.200",  #网关必须和bip在同一个网段
 "default-gateway-v6": "2001:db8:abcd::89",
 "dns": [ "1.1.1.1", "8.8.8.8"]
}
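
Note that daemon.json must be strict JSON, so the # annotations above are explanatory only and must not appear in the real file. A comment-free version can be written in one step (same values as above); also note that defining "hosts" here conflicts with an -H option on the systemd ExecStart line, so only one of the two may set the listening sockets:

[root@ubuntu1804 ~]#cat > /etc/docker/daemon.json <<'EOF'
{
 "hosts": ["tcp://0.0.0.0:2375", "fd://"],
 "bip": "192.168.100.100/24",
 "fixed-cidr": "192.168.100.128/26",
 "fixed-cidr-v6": "2001:db8::/64",
 "mtu": 1500,
 "default-gateway": "192.168.100.200",
 "default-gateway-v6": "2001:db8:abcd::89",
 "dns": [ "1.1.1.1", "8.8.8.8"]
}
EOF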

[root@ubuntu1804 ~]#systemctl restart docker
[root@ubuntu1804 ~]#ip a show docker0
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
group default
 link/ether 02:42:23:be:97:75 brd ff:ff:ff:ff:ff:ff
 inet 192.168.100.100/24 brd 192.168.100.255 scope global docker0
   valid_lft forever preferred_lft forever
 inet6 fe80::42:23ff:febe:9775/64 scope link
   valid_lft forever preferred_lft forever

[root@ubuntu1804 ~]#docker run -it --name b1 busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
36: eth0@if37: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
 link/ether 02:42:c0:a8:64:80 brd ff:ff:ff:ff:ff:ff
 inet 192.168.100.128/24 brd 192.168.100.255 scope global eth0
   valid_lft forever preferred_lft forever
/ # cat /etc/resolv.conf
search magedu.com magedu.org
nameserver 1.1.1.1
nameserver 8.8.8.8
/ # route -n
Kernel IP routing table
Destination   Gateway     Genmask     Flags Metric Ref  Use Iface
0.0.0.0     192.168.100.200 0.0.0.0     UG   0    0     0 eth0
192.168.100.0  0.0.0.0     255.255.255.0  U   0    0     0 eth0


[root@ubuntu1804 ~]#docker network inspect bridge
[
 {
    "Name": "bridge",
    "Id":
"381bc2df514b0901e2a7570708aa93a3af05f298f27d4d077b52a8b324fad66c",
    "Created": "2020-07-27T21:58:31.419420569+08:00",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": [
       {
          "Subnet": "192.168.100.0/24",
          "IPRange": "192.168.100.128/26",
          "Gateway": "192.168.100.100",
          "AuxiliaryAddresses": {
            "DefaultGatewayIPv4": "192.168.100.200"
         }
       },
       {
          "Subnet": "2001:db8::/64",
          "AuxiliaryAddresses": {
            "DefaultGatewayIPv6": "2001:db8:abcd::89"
         }
       }
     ]
   },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
   },
    "ConfigOnly": false,
    "Containers": {
      "2f16c9f5efc1eefe766f6ae6ba7fcfa3434e8f4876ecdcf48c3343acd9e45b2d":
{
        "Name": "b1",
        "EndpointID":
"0a0fdf3d786310dca53e04f0734b9f0eeaa79aac147c7a3c69ac8d04444570f3",
        "MacAddress": "02:42:c0:a8:64:80",
        "IPv4Address": "192.168.100.128/24",
        "IPv6Address": ""
     }
   },
    "Options": {
      "com.docker.network.bridge.default_bridge": "true",
      "com.docker.network.bridge.enable_icc": "true",
      "com.docker.network.bridge.enable_ip_masquerade": "true",
      "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
      "com.docker.network.bridge.name": "docker0",
      "com.docker.network.driver.mtu": "1500"
   },
    "Labels": {}
 }
]

7.3.4 Host mode

A container started in host mode does not create its own virtual NIC; it uses the host's NICs and IP directly, so the IP information seen inside the container is the host's. The container is reached simply via the host IP plus the container's port. Resources other than the network, such as the filesystem and processes, remain isolated from the host

Because traffic uses the host network directly with no translation, this mode has the best network performance; but containers must not use the same ports, so it suits services whose ports are fixed and known in advance

Characteristics of host mode:

  • selected with --network host
  • shares the host's network
  • no network performance loss
  • relatively easy network troubleshooting
  • no network isolation between containers
  • network usage cannot be accounted per container
  • difficult port management: port conflicts are easy to hit
  • port mapping is not supported
# Check the host's network settings
root@ubuntu1804:/# ip a show docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:24:f1:97:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 brd 192.168.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:24ff:fef1:9787/64 scope link 
       valid_lft forever preferred_lft forever
root@ubuntu1804:/# 

root@ubuntu1804:/# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0


# Before starting the container, confirm port 80/tcp is not open on the host
root@ubuntu1804:/# ss -ntl | grep 80


# Create a host-mode container
root@ubuntu1804:/# docker run -d --network host --name nginx1 nginx-1-16:2.0 
43d11fb9061632637b1f7c430dfd9de3c5b7611789d54aca85007b3079a3272f

# After the container starts, port 80/tcp is open on the host
root@ubuntu1804:/# ss -ntl | grep 80
LISTEN   0         128                 0.0.0.0:80               0.0.0.0:*       

# Enter the container
root@ubuntu1804:/# docker exec -it nginx1 bash


# Inside the container, the prompt still shows the host's hostname
[root@ubuntu1804 /]# hostname
ubuntu1804
[root@ubuntu1804 /]# ip a  
bash: ip: command not found
[root@ubuntu1804 /]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.0.1  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::42:24ff:fef1:9787  prefixlen 64  scopeid 0x20<link>
        ether 02:42:24:f1:97:87  txqueuelen 0  (Ethernet)
        RX packets 28  bytes 1537 (1.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 39  bytes 8417 (8.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.110  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe42:f2be  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:42:f2:be  txqueuelen 1000  (Ethernet)
        RX packets 8946  bytes 4226414 (4.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4319  bytes 392026 (382.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 126  bytes 10794 (10.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 126  bytes 10794 (10.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


[root@ubuntu1804 /]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0

[root@ubuntu1804 /]#  curl 10.0.0.10
this is 10.0.0.10

# Check the access log on the remote host
[root@Centos7 ~]# tail -n1 /var/log/httpd/access_log 
10.0.0.110 - - [08/Nov/2021:18:12:41 +0800] "GET / HTTP/1.1" 403 4897 "-" "curl/7.29.0"


# The remote host can reach the container's web service
[root@Centos7 ~]# curl 10.0.0.110
this is host


# Port mapping cannot be used in host mode
root@ubuntu1804:/# ss -ntl | grep 81
root@ubuntu1804:/# docker run -d --network host --name nginx2 -p 81:80 nginx-1-16:2.0 
WARNING: Published ports are discarded when using host network mode
dc357c11545a9d5b5e5b69b2fc7437ab88a28a9d3753b91cf515b9199c6cf90d

root@ubuntu1804:/# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                     PORTS               NAMES
dc357c11545a        nginx-1-16:2.0      "/app/nginx/sbin/ngi…"   8 seconds ago       Exited (1) 4 seconds ago                       nginx2
43d11fb90616        nginx-1-16:2.0      "/app/nginx/sbin/ngi…"   12 minutes ago      Up 12 minutes                                  nginx1


# Compare port mapping in host mode and in bridge mode
# Port mapping in host mode (the container fails to run: nginx1 already holds port 80 on the host)
root@ubuntu1804:/# docker run -d --network host --name nginx2 -p 81:80 nginx-1-16:2.0 
WARNING: Published ports are discarded when using host network mode
dc357c11545a9d5b5e5b69b2fc7437ab88a28a9d3753b91cf515b9199c6cf90d

root@ubuntu1804:/# docker port nginx2
root@ubuntu1804:/# 

# Port mapping in bridge mode
root@ubuntu1804:/# docker run -d --network bridge --name nginx4 -p 8080:80 nginx-1-16:2.0 
b09ba8d2b9e86060825dc8c84ab24d4e21cdc0aa0b0a3cf5be2c08a707b89088
root@ubuntu1804:/# docker port nginx4
80/tcp -> 0.0.0.0:8080


7.3.5 None mode

In none mode, docker performs no network configuration at all: no NIC, no IP, no routes, so by default the container cannot talk to anything. NICs and IPs must be added by hand, which is why this mode is rarely used

Characteristics of none mode

  • selected with --network none
  • no networking by default; cannot communicate externally (a manual wiring sketch follows the example below)
# Start a none-mode container
root@ubuntu1804:/# docker run -d --network none -p 8001:80 --name none1 nginx-1-16:2.0 
cd87f4ce2e9399d36a60e8702811ef20ed95f3dbf8ad7221306f58a8ff8c112e

root@ubuntu1804:/# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
cd87f4ce2e93        nginx-1-16:2.0      "/app/nginx/sbin/ngi…"   3 seconds ago       Up 3 seconds                            none1

root@ubuntu1804:/# docker port none1 
root@ubuntu1804:/# docker exec -it none1 bash
[root@cd87f4ce2e93 /]# ip a
bash: ip: command not found
[root@cd87f4ce2e93 /]# ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@cd87f4ce2e93 /]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
[root@cd87f4ce2e93 /]# ss -ntl
bash: ss: command not found
[root@cd87f4ce2e93 /]# netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN     
[root@cd87f4ce2e93 /]# ping www.baidu.com
ping: www.baidu.com: Name or service not known
[root@cd87f4ce2e93 /]# ping 192.168.0.1
connect: Network is unreachable
[root@cd87f4ce2e93 /]# 

# Truly nothing here: no NIC, no IP, no routes
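
If a none-mode container later does need connectivity, a NIC can be wired in by hand from the host. A minimal sketch, assuming the container is named none1 and docker0 still holds 192.168.0.1/24 as above (the veth names and the 192.168.0.100 address are arbitrary):

pid=$(docker inspect -f '{{.State.Pid}}' none1)   # PID of the container's init process
mkdir -p /var/run/netns
ln -sf /proc/$pid/ns/net /var/run/netns/none1     # make the container's netns visible to ip(8)
ip link add veth-h type veth peer name veth-c     # create a veth pair
brctl addif docker0 veth-h                        # attach the host end to the docker0 bridge
ip link set veth-h up
ip link set veth-c netns none1                    # move the other end into the container's netns
ip netns exec none1 ip link set veth-c name eth0
ip netns exec none1 ip link set eth0 up
ip netns exec none1 ip addr add 192.168.0.100/24 dev eth0
ip netns exec none1 ip route add default via 192.168.0.1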

7.3.6 Container mode

A container created in this mode shares the network of an existing, specified container rather than the host's. The new container creates no NIC and configures no IP of its own; it shares the specified container's IP and port range, so its ports must not clash with that container's. Everything besides the network, such as the filesystem and process table, stays isolated, and the two containers' processes can communicate over the lo interface

Characteristics of container mode

  • selected with --network container:<name or ID>
  • isolated from the host's network namespace
  • the containers share one network namespace
  • suits frequent container-to-container traffic
  • uses the other container's network directly; rarely used
# Create the first container
root@ubuntu1804:/# docker run -it --name cont1 -p 80:80 alpine:3.11 sh
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:00:02  
          inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:696 (696.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
/ # 


# Run the following in another terminal
root@ubuntu1804:/# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                NAMES
ba6718cd96c4        alpine:3.11         "sh"                21 seconds ago      Up 20 seconds       0.0.0.0:80->80/tcp   cont1
root@ubuntu1804:/# docker port cont1 
80/tcp -> 0.0.0.0:80

# The web service is not reachable yet: nothing is listening in cont1
root@ubuntu1804:/# curl 127.0.0.1/app/
curl: (56) Recv failure: Connection reset by peer
root@ubuntu1804:/# 


# Create a second container in container mode, sharing the first container's network
root@ubuntu1804:/# docker run -d --name cont2 --network container:cont1 nginx-1-16:2.0 
5206606de8e51bf83b8fe17256d1708bcc2fa03178a5b70ca0ccc2c46196dbd1


# Now the web service is reachable
root@ubuntu1804:/# curl 127.0.0.1/app/
hello nginx


# Enter the second container and check its network information
root@ubuntu1804:/# docker exec -it cont2 bash
[root@ba6718cd96c4 /]# ip a
bash: ip: command not found
[root@ba6718cd96c4 /]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.2  netmask 255.255.255.0  broadcast 192.168.0.255
        ether 02:42:c0:a8:00:02  txqueuelen 0  (Ethernet)
        RX packets 26  bytes 1905 (1.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 807 (807.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ba6718cd96c4 /]# netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN     
[root@ba6718cd96c4 /]# ping www.baidu.com
PING www.wshifen.com (103.235.46.39) 56(84) bytes of data.
64 bytes from 103.235.46.39 (103.235.46.39): icmp_seq=1 ttl=127 time=245 ms
64 bytes from 103.235.46.39 (103.235.46.39): icmp_seq=2 ttl=127 time=244 ms
^C
--- www.wshifen.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 244.036/244.559/245.082/0.523 ms
[root@ba6718cd96c4 /]# 



7.3.7 Custom networks

Besides the modes above, custom networks can be defined with their own subnets, gateways and other settings

Note: containers in a custom network can reach each other directly by container name, without --link

Custom networks give each cluster of applications its own independently managed network without interference, and within the same network containers can conveniently address each other by name

7.3.7.1 Implementation

Command help:

root@ubuntu1804:/# docker network --help 

Usage:  docker network COMMAND

Manage networks

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.

--------------------------------------------------------------------------------------------------------------------------------
root@ubuntu1804:/# docker network create --help

Usage:  docker network create [OPTIONS] NETWORK

Create a network

Options:
      --attachable           Enable manual container attachment
      --aux-address map      Auxiliary IPv4 or IPv6 addresses used by Network driver
                             (default map[])
      --config-from string   The network from which copying the configuration
      --config-only          Create a configuration only network
  -d, --driver string        Driver to manage the Network (default "bridge")
      --gateway strings      IPv4 or IPv6 Gateway for the master subnet
      --ingress              Create swarm routing-mesh network
      --internal             Restrict external access to the network
      --ip-range strings     Allocate container ip from a sub-range
      --ipam-driver string   IP Address Management Driver (default "default")
      --ipam-opt map         Set IPAM driver specific options (default map[])
      --ipv6                 Enable IPv6 networking
      --label list           Set metadata on a network
  -o, --opt map              Set driver specific options (default map[])
      --scope string         Control the network's scope
      --subnet strings       Subnet in CIDR format that represents a network segment


Create a custom network:

docker network create -d <mode> --subnet <CIDR> --gateway <gateway> <custom network name>

# note: <mode> cannot be host or none

Inspect a custom network

docker network inspect <custom network name or ID>

Run a container on a custom network

docker run --network <custom network name> <image name>

Delete a custom network

docker network rm <custom network name or ID>

7.3.7.2 Case study: a custom network

7.3.7.2.1 Create the custom network
root@ubuntu1804:/# docker network create -d bridge --subnet 192.200.0.0/16 --gateway 192.200.0.1 test1
4816db734e1fe7db1c1ac6e82ba2ebe8659d22769ca8d1d534d591f1eb4dd59a
root@ubuntu1804:/# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
root@ubuntu1804:/# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

root@ubuntu1804:/# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
701d8f336085        bridge              bridge              local
f0e32533201f        host                host                local
50ba4fee40db        none                null                local
4816db734e1f        test1               bridge              local

root@ubuntu1804:/# docker network inspect test1 
[
    {
        "Name": "test1",
        "Id": "4816db734e1fe7db1c1ac6e82ba2ebe8659d22769ca8d1d534d591f1eb4dd59a",
        "Created": "2021-11-08T14:08:01.389415754+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.200.0.0/16",
                    "Gateway": "192.200.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

root@ubuntu1804:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:42:f2:be brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.110/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe42:f2be/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:24:f1:97:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 brd 192.168.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:24ff:fef1:9787/64 scope link 
       valid_lft forever preferred_lft forever
18: br-4816db734e1f: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default   # a new virtual NIC has appeared
    link/ether 02:42:35:4a:06:82 brd ff:ff:ff:ff:ff:ff
    inet 192.200.0.1/16 brd 192.200.255.255 scope global br-4816db734e1f
       valid_lft forever preferred_lft forever


root@ubuntu1804:/# brctl show
bridge name bridge id       STP enabled interfaces
br-4816db734e1f     8000.0242354a0682   no      # a new bridge has appeared
docker0     8000.024224f19787   no      
root@ubuntu1804:/# 
root@ubuntu1804:/# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
192.200.0.0     0.0.0.0         255.255.0.0     U     0      0        0 br-4816db734e1f
root@ubuntu1804:/# 


7.3.7.2.2 Create a container on the custom network
root@ubuntu1804:/# docker run -it --rm --network test1 alpine:3.11 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
19: eth0@if20: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:c0:c8:00:02 brd ff:ff:ff:ff:ff:ff
    inet 192.200.0.2/16 brd 192.200.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.200.0.1     0.0.0.0         UG    0      0        0 eth0
192.200.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
/ # 
/ # cat /etc/resolv.conf 
nameserver 127.0.0.11
options ndots:0
/ # 
/ # ping www.baidu.com
PING www.baidu.com (14.215.177.39): 56 data bytes
64 bytes from 14.215.177.39: seq=0 ttl=127 time=36.657 ms
64 bytes from 14.215.177.39: seq=1 ttl=127 time=36.757 ms
^C
--- www.baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 36.657/36.707/36.757 ms


# In a new terminal, inspect the network again
root@ubuntu1804:/# docker inspect test1 
[
    {
        "Name": "test1",
        "Id": "4816db734e1fe7db1c1ac6e82ba2ebe8659d22769ca8d1d534d591f1eb4dd59a",
        "Created": "2021-11-08T14:08:01.389415754+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.200.0.0/16",
                    "Gateway": "192.200.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
# The containers in this network now appear here
        "Containers": {
            "41f7497d8aedeb7b00473f1004d89bf2715f167a2f6e23b9f31a7982ba0298ef": {
                "Name": "optimistic_williamson",
                "EndpointID": "23545b416ce5838f8c8c2497515e31286a7e3d2a6d2872ec65847f3a8b09e7e1",
                "MacAddress": "02:42:c0:c8:00:02",
                "IPv4Address": "192.200.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]


7.3.7.3 Case study: communication between containers in a custom network

# Container one is the container from the case above

# Add another container
root@ubuntu1804:/# docker run -it --rm --network test1 --name haha alpine:3.11 sh
/ # 
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
21: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:c0:c8:00:03 brd ff:ff:ff:ff:ff:ff
    inet 192.200.0.3/16 brd 192.200.255.255 scope global eth0
       valid_lft forever preferred_lft forever


# Neither container has the other's IP in its hosts file
Container one
/ # cat /etc/hosts
127.0.0.1   localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.200.0.2 41f7497d8aed

Container two
/ # cat /etc/hosts
127.0.0.1   localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.200.0.3 0b5c32c4349b

# Yet from container one, both container two's name and its IP are pingable; name resolution is done by docker's embedded DNS (the 127.0.0.11 nameserver seen earlier), not /etc/hosts
/ # ping haha
PING haha (192.200.0.3): 56 data bytes
64 bytes from 192.200.0.3: seq=0 ttl=64 time=0.094 ms
64 bytes from 192.200.0.3: seq=1 ttl=64 time=0.083 ms
^C
--- haha ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.083/0.088/0.094 ms

/ # ping 192.200.0.3
PING 192.200.0.3 (192.200.0.3): 56 data bytes
64 bytes from 192.200.0.3: seq=0 ttl=64 time=0.120 ms
64 bytes from 192.200.0.3: seq=1 ttl=64 time=0.105 ms
^C
--- 192.200.0.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.105/0.112/0.120 ms



Conclusion: containers in a custom network can communicate directly by container name

7.3.7.4 Case study: a Redis Cluster on a custom network (Redis is not covered yet; the author defers this until later)
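
The Redis details aside, the networking side of such a cluster is exactly what this section covered: one custom network plus name/IP reachability. A minimal sketch, assuming the public redis:5 image and a six-node cluster (the network name, subnet and all options are illustrative, not the author's original lab):

docker network create -d bridge --subnet 172.30.0.0/24 --gateway 172.30.0.1 redis-net

# six cluster-enabled nodes on the custom network
for i in 1 2 3 4 5 6; do
  docker run -d --name redis-$i --network redis-net redis:5 \
         redis-server --cluster-enabled yes --appendonly yes
done

# form the cluster; redis-cli wants IPs, and these assume sequential allocation from the subnet
docker exec -it redis-1 redis-cli --cluster create \
  172.30.0.2:6379 172.30.0.3:6379 172.30.0.4:6379 \
  172.30.0.5:6379 172.30.0.6:6379 172.30.0.7:6379 \
  --cluster-replicas 1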

7.3.8 Communication between containers on different networks of the same host

Start two containers, one on a custom network and one on the default bridge network; by default the iptables rules keep them from communicating

# Define a custom network
root@ubuntu1804:/# docker network create -d bridge --subnet 192.200.0.0/16 --gateway 192.200.0.1 test1
4816db734e1fe7db1c1ac6e82ba2ebe8659d22769ca8d1d534d591f1eb4dd59a

# Create container c1 on the default network
root@ubuntu1804:/# docker run  -it --rm --name c1 alpine:3.11 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:c0:a8:00:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.2/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever


# Create container c2 on the custom network
root@ubuntu1804:/# docker run -it --rm --name c2 --network test1 alpine:3.11 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
25: eth0@if26: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:c0:c8:00:02 brd ff:ff:ff:ff:ff:ff
    inet 192.200.0.2/16 brd 192.200.255.255 scope global eth0
       valid_lft forever preferred_lft forever

# ping c2 from c1
/ # ping 192.200.0.2        # the custom-network container cannot be reached
PING 192.200.0.2 (192.200.0.2): 56 data bytes

# ping c1 from c2
/ # ping 192.168.0.2        # the default-network container cannot be reached
PING 192.168.0.2 (192.168.0.2): 56 data bytes


7.3.8.1 Case study 1: modify iptables to let containers on different networks of the same host communicate

# Confirm ip_forward is enabled
root@ubuntu1804:/# cat /proc/sys/net/ipv4/ip_forward
1

# The default network and the custom network are two separate bridges
root@ubuntu1804:/# brctl show
bridge name bridge id       STP enabled interfaces
br-4816db734e1f     8000.0242354a0682   no      veth6bd0316
docker0     8000.024224f19787   no      vethd60f963


root@ubuntu1804:/# iptables -vnL
Chain INPUT (policy ACCEPT 1553 packets, 92476 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  737 61926 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
  737 61926 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0  
   10   933 ACCEPT     all  --  *      br-4816db734e1f  0.0.0.0/0            0.0.0.0/0        
    2   168 DOCKER     all  --  *      br-4816db734e1f  0.0.0.0/0            0.0.0.0/0        
    5   345 ACCEPT     all  --  br-4816db734e1f !br-4816db734e1f  0.0.0.0/0            0.0.0.0
    2   168 ACCEPT     all  --  br-4816db734e1f br-4816db734e1f  0.0.0.0/0            0.0.0.0/
   15  1708 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctst
    2   104 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
   14  2295 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0           

Chain OUTPUT (policy ACCEPT 1165 packets, 94644 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  240 20085 DOCKER-ISOLATION-STAGE-2  all  --  br-4816db734e1f !br-4816db734e1f  0.0.0.0/0    
  499 43035 DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/
   48  5553 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-ISOLATION-STAGE-2 (2 references)
 pkts bytes target     prot opt in     out     source               destination         
  485 40740 DROP       all  --  *      br-4816db734e1f  0.0.0.0/0            0.0.0.0/0        
  235 19740 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
   19  2640 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  826 76017 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           




root@ubuntu1804:/# iptables-save 
# Generated by iptables-save v1.6.1 on Mon Nov  8 14:56:25 2021
*filter
:INPUT ACCEPT [1594:95104]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [1196:99776]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o br-4816db734e1f -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-4816db734e1f -j DOCKER
-A FORWARD -i br-4816db734e1f ! -o br-4816db734e1f -j ACCEPT
-A FORWARD -i br-4816db734e1f -o br-4816db734e1f -j ACCEPT
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i br-4816db734e1f ! -o br-4816db734e1f -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o br-4816db734e1f -j DROP      # note this rule
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP              # note this rule
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Mon Nov  8 14:56:25 2021
# Generated by iptables-save v1.6.1 on Mon Nov  8 14:56:25 2021
*nat
:PREROUTING ACCEPT [926:78289]
:INPUT ACCEPT [4:916]
:OUTPUT ACCEPT [15:1140]
:POSTROUTING ACCEPT [17:1308]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 192.200.0.0/16 ! -o br-4816db734e1f -j MASQUERADE
-A POSTROUTING -s 192.168.0.0/24 ! -o docker0 -j MASQUERADE
-A DOCKER -i br-4816db734e1f -j RETURN
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Mon Nov  8 14:56:25 2021
root@ubuntu1804:/# 


root@ubuntu1804:/# iptables-save > iptables.rule
root@ubuntu1804:/# vim iptables.rule 
# Change the following two rules
-A DOCKER-ISOLATION-STAGE-2 -o br-4816db734e1f -j ACCEPT        
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j ACCEPT    

# Or simply run this command instead
[root@ubuntu1804 ~]#iptables -I DOCKER-ISOLATION-STAGE-2 -j ACCEPT
root@ubuntu1804:/# 
root@ubuntu1804:/# iptables-restore < iptables.rule 
root@ubuntu1804:/# 

## Now the two containers can reach each other (the ping below had been running since before the rule change, hence the early losses)
PING 192.168.0.2 (192.168.0.2): 56 data bytes
64 bytes from 192.168.0.2: seq=37 ttl=63 time=0.149 ms
64 bytes from 192.168.0.2: seq=38 ttl=63 time=0.094 ms
64 bytes from 192.168.0.2: seq=39 ttl=63 time=0.095 ms
64 bytes from 192.168.0.2: seq=40 ttl=63 time=0.102 ms
64 bytes from 192.168.0.2: seq=41 ttl=63 time=0.098 ms
^C
--- 192.168.0.2 ping statistics ---
42 packets transmitted, 5 packets received, 88% packet loss
round-trip min/avg/max = 0.094/0.107/0.149 ms
/ # 


7.3.8.2 Case study 2: connect containers on different networks of the same host with docker network connect

The docker network connect command can let containers on different networks of the same host communicate

# Attach CONTAINER to the given NETWORK, so it can talk to the other containers in that NETWORK
docker network connect [OPTIONS] NETWORK CONTAINER

root@ubuntu1804:/# docker network connect --help

Usage:  docker network connect [OPTIONS] NETWORK CONTAINER

Connect a container to a network

Options:
      --alias strings           Add network-scoped alias for the container
      --driver-opt strings      driver options for the network
      --ip string               IPv4 address (e.g., 172.30.100.104)
      --ip6 string              IPv6 address (e.g., 2001:db8::33)
      --link list               Add link to another container
      --link-local-ip strings   Add a link-local address for the container

# Detach CONTAINER from the given NETWORK, so it can no longer talk to the containers in that NETWORK
docker network disconnect [OPTIONS] NETWORK CONTAINER

root@ubuntu1804:/# docker network disconnect --help

Usage:  docker network disconnect [OPTIONS] NETWORK CONTAINER

Disconnect a container from a network

Options:
  -f, --force   Force the container to disconnect from a network


7.3.8.2.1 As above, c1 and c2 cannot communicate by default
# Each network lists the containers that belong to it
root@ubuntu1804:/# docker inspect test1 
[
    {
        "Name": "test1",
        "Id": "4816db734e1fe7db1c1ac6e82ba2ebe8659d22769ca8d1d534d591f1eb4dd59a",
        "Created": "2021-11-08T14:08:01.389415754+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.200.0.0/16",
                    "Gateway": "192.200.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "402e7b723e3a237c85c0833d2c9386e561de6e1738c3b3850edfe46c3218afff": {
                "Name": "c2",
                "EndpointID": "ed9be5bd4b4259cac0293cdc700d5a97460a2a11ec90a1eff7b51075bcdd0158",
                "MacAddress": "02:42:c0:c8:00:02",
                "IPv4Address": "192.200.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]


root@ubuntu1804:/# docker inspect bridge 
[
    {
        "Name": "bridge",
        "Id": "701d8f3360859db47e1866c037fef069240783a2d7fc82c28069f8243652a4d9",
        "Created": "2021-11-08T08:37:23.276332677+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "192.168.0.1/24",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "e3b58f34bc263dff60ab285ff559d7d7c48305a0d08d050748dbab1aa063757f": {
                "Name": "c1",
                "EndpointID": "d7e88aaa5ec70475d89da6fea6fd44cc803116e66783b8aa40744687699a7556",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]


7.3.8.2.2 Let c1 on the default network reach c2 on the custom network test1
root@ubuntu1804:/# docker network connect test1 c1
root@ubuntu1804:/# docker network inspect test1 
[
    {
        "Name": "test1",
        "Id": "4816db734e1fe7db1c1ac6e82ba2ebe8659d22769ca8d1d534d591f1eb4dd59a",
        "Created": "2021-11-08T14:08:01.389415754+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.200.0.0/16",
                    "Gateway": "192.200.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "402e7b723e3a237c85c0833d2c9386e561de6e1738c3b3850edfe46c3218afff": {
                "Name": "c2",
                "EndpointID": "ed9be5bd4b4259cac0293cdc700d5a97460a2a11ec90a1eff7b51075bcdd0158",
                "MacAddress": "02:42:c0:c8:00:02",
                "IPv4Address": "192.200.0.2/16",
                "IPv6Address": ""
            },

# Container c1 now appears in network test1
"e3b58f34bc263dff60ab285ff559d7d7c48305a0d08d050748dbab1aa063757f": {
                "Name": "c1",
                "EndpointID": "9dd76a44fd5f92d88e94be079869d5e9d4164965ed1ce6f600021781b28cc4e2",
                "MacAddress": "02:42:c0:c8:00:03",
                "IPv4Address": "192.200.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

# Inside c1 a new NIC has appeared, with an IP from the test1 network
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:c0:a8:00:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.2/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever
27: eth1@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:c0:c8:00:03 brd ff:ff:ff:ff:ff:ff
    inet 192.200.0.3/16 brd 192.200.255.255 scope global eth1
       valid_lft forever preferred_lft forever
/ #
# c1 can now reach c2 (this ping, too, had been running since before the connect)
PING 192.200.0.2 (192.200.0.2): 56 data bytes
64 bytes from 192.200.0.2: seq=37 ttl=63 time=0.149 ms
64 bytes from 192.200.0.2: seq=38 ttl=63 time=0.094 ms
^C
--- 192.200.0.2 ping statistics ---
42 packets transmitted, 5 packets received, 88% packet loss
round-trip min/avg/max = 0.094/0.107/0.149 ms

# Nothing changed inside c2; it still cannot reach c1
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
25: eth0@if26: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:c0:c8:00:02 brd ff:ff:ff:ff:ff:ff
    inet 192.200.0.2/16 brd 192.200.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 
/ # ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2): 56 data bytes
^C
--- 192.168.0.2 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

7.3.8.2.3 Let c2 on the custom network reach c1 on the default network (same procedure as above)
# Also attach c2 from the custom network to the default network, so it can talk to c1 there
root@ubuntu1804:/# docker network connect bridge c2
root@ubuntu1804:/# docker network inspect bridge

7.3.8.2.4 Cut off communication between containers on different networks
# Detach c1 from network test1 so it can no longer talk to the containers there
root@ubuntu1804:/# docker network disconnect test1 c1
root@ubuntu1804:/# docker network inspect test1 
[
    {
        "Name": "test1",
        "Id": "4816db734e1fe7db1c1ac6e82ba2ebe8659d22769ca8d1d534d591f1eb4dd59a",
        "Created": "2021-11-08T14:08:01.389415754+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.200.0.0/16",
                    "Gateway": "192.200.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "402e7b723e3a237c85c0833d2c9386e561de6e1738c3b3850edfe46c3218afff": {
                "Name": "c2",
                "EndpointID": "ed9be5bd4b4259cac0293cdc700d5a97460a2a11ec90a1eff7b51075bcdd0158",
                "MacAddress": "02:42:c0:c8:00:02",
                "IPv4Address": "192.200.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
root@ubuntu1804:/# 

/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:c0:a8:00:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.2/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 


7.4 Interconnecting containers across hosts

7.4.1 Method 1: bridging (very limited, hard to generalize, rarely used)

# Run the following on both hosts
[root@ubuntu1804 ~]#apt -y install bridge-utils
[root@ubuntu1804 ~]#brctl addif docker0 eth0
# Note: once eth0 is enslaved to docker0, xshell can no longer reach the machine, so the following must be done on the console
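
To keep the machine reachable over the network after eth0 is enslaved, the host's address can be moved onto the bridge itself. A hedged sketch, assuming host IP 10.0.0.101/24 and gateway 10.0.0.1 (substitute the real values):

ip addr del 10.0.0.101/24 dev eth0      # the address must leave the enslaved NIC
ip addr add 10.0.0.101/24 dev docker0   # and move onto the bridge
ip route add default via 10.0.0.1       # restore the default route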


# Start one container on each host, make sure their IPs differ, and test access both ways
# Container on the first host
[root@ubuntu1804 ~]#docker run -it --name b1 busybox
/ # hostname -i
172.17.0.2
/ # httpd -h /data/html/ -f -v
[::ffff:172.17.0.3]:42488:response:200

# Container on the second host (a single container would default to 172.17.0.2, so first start a throwaway container to occupy 172.17.0.2; b2 then gets 172.17.0.3)
[root@ubuntu1804 ~]#docker run -it --name b2 busybox
/ # hostname -i
172.17.0.3
/ # wget -qO - http://172.17.0.2
httpd website in busybox

7.4.2 Method 2: NAT (suits small networks; for complex networks use k8s)

7.4.2.1 How docker cross-host interconnection works

Cross-host interconnection means a container on host A can reach a container on host B. The prerequisite is that the hosts themselves can communicate; the containers then reach each other through their hosts

How it works: adding a network route on each host is enough for host A's containers to reach host B's containers, as sketched below

Note: this only suits small environments; complex or large networks can use google's open-source k8s instead
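
Concretely, the "network route" amounts to one static route per host pointing the other side's container subnet at the other host. A minimal sketch using the addresses configured in the next subsections (host A 10.0.0.101 with containers on 192.168.100.0/24, host B 10.0.0.102 with containers on 192.168.200.0/24); depending on the FORWARD chain, an extra iptables ACCEPT rule may also be required:

# on host A: reach B's containers via B's host address
ip route add 192.168.200.0/24 via 10.0.0.102

# on host B: reach A's containers via A's host address
ip route add 192.168.100.0/24 via 10.0.0.101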

7.4.2.2 Change each host's subnet

Docker默认网段是172.17.0.x/24,而且每个宿主机都是一样的,因此要做路由的前提就是各个主机的网络不能一致

7.4.2.2.1 第一个宿主机A上更改网段
[root@ubuntu1804 ~]#vim /etc/docker/daemon.json
[root@ubuntu1804 ~]#cat /etc/docker/daemon.json
{
 "bip": "192.168.100.1/24",
 "registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}

[root@ubuntu1804 ~]# systemctl restart docker
[root@ubuntu1804 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
 link/ether 00:0c:29:6b:54:d3 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.101/24 brd 10.0.0.255 scope global eth0
   valid_lft forever preferred_lft forever
 inet6 fe80::20c:29ff:fe6b:54d3/64 scope link
   valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
 link/ether 02:42:e0:ef:72:05 brd ff:ff:ff:ff:ff:ff
 inet 192.168.100.1/24 brd 192.168.100.255 scope global docker0
   valid_lft forever preferred_lft forever
 inet6 fe80::42:e0ff:feef:7205/64 scope link
   valid_lft forever preferred_lft forever
[root@ubuntu1804 ~]#route -n
Kernel IP routing table
Destination   Gateway     Genmask     Flags Metric Ref  Use Iface
0.0.0.0     10.0.0.2     0.0.0.0     UG   0    0     0 eth0
10.0.0.0     0.0.0.0     255.255.255.0  U   0    0     0 eth0
192.168.100.0  0.0.0.0     255.255.255.0  U   0    0     0 docker0

7.4.2.2.2 Change the subnet on the second host, B
[root@ubuntu1804 ~]#vim /etc/docker/daemon.json
{
 "bip": "192.168.200.1/24",
 "registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}
[root@ubuntu1804 ~]#systemctl restart docker

[root@ubuntu1804 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
 link/ether 00:0c:29:01:f3:0c brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.102/24 brd 10.0.0.255 scope global eth0
   valid_lft forever preferred_lft forever
 inet6 fe80::20c:29ff:fe01:f30c/64 scope link
   valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
 link/ether 02:42:e8:c0:a4:d8 brd ff:ff:ff:ff:ff:ff
 inet 192.168.200.1/24 brd 192.168.200.255 scope global docker0
   valid_lft forever preferred_lft forever
  inet6 fe80::42:e8ff:fec0:a4d8/64 scope link
   valid_lft forever preferred_lft forever

[root@ubuntu1804 ~]#route -n
Kernel IP routing table
Destination   Gateway     Genmask     Flags Metric Ref  Use Iface
0.0.0.0     10.0.0.2     0.0.0.0     UG   0    0     0 eth0
10.0.0.0     0.0.0.0     255.255.255.0  U   0    0     0 eth0
192.168.200.0  0.0.0.0     255.255.255.0  U   0    0     0 docker0

7.4.2.3 Start one container on each host

Start container server1 on the first host

[root@ubuntu1804 ~]#docker run -it --name server1 --rm alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
 link/ether 02:42:c0:a8:64:02 brd ff:ff:ff:ff:ff:ff
 inet 192.168.100.2/24 brd 192.168.100.255 scope global eth0
   valid_lft forever preferred_lft forever
/ # route -n
Kernel IP routing table
Destination   Gateway     Genmask     Flags Metric Ref  Use Iface
0.0.0.0     192.168.100.1  0.0.0.0     UG   0    0     0 eth0
192.168.100.0  0.0.0.0     255.255.255.0  U   0    0     0 eth0

Start container server2 on the second host

[root@ubuntu1804 ~]#docker run -it --name server2 --rm alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
 link/ether 02:42:c0:a8:c8:02 brd ff:ff:ff:ff:ff:ff
 inet 192.168.200.2/24 brd 192.168.200.255 scope global eth0
   valid_lft forever preferred_lft forever
/ # route -n
Kernel IP routing table
Destination   Gateway     Genmask     Flags Metric Ref  Use Iface
0.0.0.0     192.168.200.1  0.0.0.0     UG   0    0     0 eth0
192.168.200.0  0.0.0.0     255.255.255.0  U   0    0     0 eth0

Container server1 on the first host and server2 on the second host cannot reach each other yet

[root@ubuntu1804 ~]#docker run -it --name server1 --rm alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
 link/ether 02:42:0a:64:00:02 brd ff:ff:ff:ff:ff:ff
 inet 10.100.0.2/16 brd 10.100.255.255 scope global eth0
   valid_lft forever preferred_lft forever
/ # ping -c1 192.168.200.2
PING 192.168.200.2 (192.168.200.2): 56 data bytes
--- 192.168.200.2 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

7.4.2.4 ★Add static routes and iptables rules

On each host, add a static route whose gateway is the peer host's IP

7.4.2.4.1 Add the static route and iptables rule on the first host
[root@ubuntu1804 ~]#route add -net 192.168.200.0/24 gw 10.0.0.102
[root@ubuntu1804 ~]#iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT

7.4.2.4.2 Add the static route and iptables rule on the second host
[root@ubuntu1804 ~]#route add -net 192.168.100.0/24 gw 10.0.0.101
[root@ubuntu1804 ~]#iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT
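Note that routes added with route add and rules added with iptables -A do not survive a reboot. A minimal persistence sketch, reusing the addresses above and the rc.local approach this document uses elsewhere (one option among several):

#On host A: append to /etc/rc.local (and chmod +x /etc/rc.local)
route add -net 192.168.200.0/24 gw 10.0.0.102
iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT

#On host B: append to /etc/rc.local (and chmod +x /etc/rc.local)
route add -net 192.168.100.0/24 gw 10.0.0.101
iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT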

7.4.2.5 Test container interconnection across the hosts

Container server1 on host A accesses container server2 on host B, while tcpdump captures packets on host B

/ # ping -c1 192.168.200.2
PING 192.168.200.2 (192.168.200.2): 56 data bytes
64 bytes from 192.168.200.2: seq=0 ttl=62 time=1.022 ms
--- 192.168.200.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.022/1.022/1.022 ms
#The capture on host B shows the following
[root@ubuntu1804 ~]#tcpdump -i eth0 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
16:57:37.912925 IP 10.0.0.101 > 192.168.200.2: ICMP echo request, id 2560, seq 0, length 64
16:57:37.913208 IP 192.168.200.2 > 10.0.0.101: ICMP echo reply, id 2560, seq 0, length 64

Container server2 on host B accesses container server1 on host A, while tcpdump captures packets on host A

/ # ping -c1 192.168.100.2
PING 192.168.100.2 (192.168.100.2): 56 data bytes
64 bytes from 192.168.100.2: seq=0 ttl=62 time=1.041 ms
--- 192.168.100.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.041/1.041/1.041 ms
#The capture on host A shows the following
[root@ubuntu1804 ~]#tcpdump -i eth0 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
16:59:11.775784 IP 10.0.0.102 > 192.168.100.2: ICMP echo request, id 2560, seq 0, length 64
16:59:11.776113 IP 192.168.100.2 > 10.0.0.102: ICMP echo reply, id 2560, seq 0, length 64

7.4.2.6 Start a third container and test

#On the second host, B, start a web-serving nginx container, server3
#Note: no port mapping is needed
[root@ubuntu1804 ~]#docker run -d --name server3 centos7-nginx:1.6.1
69fc554fd00e4f7880c139283b64f2701feafb91047b217906b188c1f461b699
[root@ubuntu1804 ~]#docker exec -it server3 bash
[root@69fc554fd00e /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
   inet 192.168.200.3 netmask 255.255.255.0 broadcast 192.168.200.255
   ether 02:42:c0:a8:c8:03 txqueuelen 0 (Ethernet)
   RX packets 8 bytes 656 (656.0 B)
   RX errors 0 dropped 0 overruns 0 frame 0
   TX packets 0 bytes 0 (0.0 B)
   TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
   inet 127.0.0.1 netmask 255.0.0.0
   loop txqueuelen 1000 (Local Loopback)
   RX packets 0 bytes 0 (0.0 B)
   RX errors 0 dropped 0 overruns 0 frame 0
   TX packets 0 bytes 0 (0.0 B)
   TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


#Accessing server3's page from server1 succeeds
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
 link/ether 02:42:0a:64:00:02 brd ff:ff:ff:ff:ff:ff
 inet 10.100.0.2/16 brd 10.100.255.255 scope global eth0
   valid_lft forever preferred_lft forever
/ # wget -qO - http://192.168.200.3/app
Test Page in app
/ #

#server3's access log shows the request coming from the first host rather than from the server1 container
[root@69fc554fd00e /]# tail -f /apps/nginx/logs/access.log
10.0.0.101 - - [02/Feb/2020:09:02:00 +0000] "GET /app HTTP/1.1" 301 169 "-" "Wget"

#A tcpdump capture of port 80/tcp shows the following
[root@ubuntu1804 ~]#tcpdump -i eth0 -nn port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
17:03:35.885627 IP 10.0.0.101.43578 > 192.168.200.3.80: Flags [S], seq 3672256868, win 29200, options [mss 1460,sackOK,TS val 4161963574 ecr 0,nop,wscale 7], length 0
17:03:35.885768 IP 192.168.200.3.80 > 10.0.0.101.43578: Flags [S.], seq 2298407060, ack 3672256869, win 28960, options [mss 1460,sackOK,TS val 3131173298 ecr 4161963574,nop,wscale 7], length 0
17:03:35.886312 IP 10.0.0.101.43578 > 192.168.200.3.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 4161963575 ecr 3131173298], length 0
17:03:35.886507 IP 10.0.0.101.43578 > 192.168.200.3.80: Flags [P.], seq 1:80, ack 1, win 229, options [nop,nop,TS val 4161963575 ecr 3131173298], length 79: HTTP: GET /app HTTP/1.1
17:03:35.886541 IP 192.168.200.3.80 > 10.0.0.101.43578: Flags [.], ack 80, win 227, options [nop,nop,TS val 3131173299 ecr 4161963575], length 0
17:03:35.887179 IP 192.168.200.3.80 > 10.0.0.101.43578: Flags [P.], seq 1:365, ack 80, win 227, options [nop,nop,TS val 3131173299 ecr 4161963575], length 364: HTTP: HTTP/1.1 301 Moved Permanently
17:03:35.887222 IP 192.168.200.3.80 > 10.0.0.101.43578: Flags [F.], seq 365, ack 80, win 227, options [nop,nop,TS val 3131173299 ecr 4161963575], length 0
17:03:35.890139 IP 10.0.0.101.43580 > 192.168.200.3.80: Flags [.], ack 1660534352, win 229, options [nop,nop,TS val 4161963579 ecr 3131173301], length 0
17:03:35.890297 IP 10.0.0.101.43580 > 192.168.200.3.80: Flags [P.], seq 0:80, ack 1, win 229, options [nop,nop,TS val 4161963579 ecr 3131173301], length 80: HTTP: GET /app/ HTTP/1.1
17:03:35.890327 IP 192.168.200.3.80 > 10.0.0.101.43580: Flags [.], ack 80, win 227, options [nop,nop,TS val 3131173303 ecr 4161963579], length 0

7.4.3 Method 3: use Open vSwitch to interconnect containers across hosts

7.4.3.1 Open vSwitch introduction

Open vSwitch (Open Virtual Switch, OVS) is a production-quality multilayer virtual switch released under the open-source Apache 2.0 license. Developed by Nicira Networks, with its core implemented in portable C, it aims to make large-scale network automation programmable and extensible while still supporting standard management interfaces and protocols (e.g. NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag). In other words, Open vSwitch implements switch functionality in software

Compared with a traditional physical switch, a virtual switch has clear advantages. First, configuration is far more flexible: one ordinary server can host dozens or even hundreds of virtual switches, with a freely chosen number of ports per switch; a single VMware ESXi server, for example, can emulate 248 virtual switches, each with 56 virtual ports by default. Second, cost is much lower: virtual switching often achieves performance otherwise only available from expensive physical switches; on Microsoft's Hyper-V platform, for instance, links between a VM and a virtual switch easily reach 10Gbps.

Official site: http://www.openvswitch.org/

Principle of cross-host container connection with Open vSwitch

What is a GRE tunnel?

GRE: Generic Routing Encapsulation
Tunneling is a way of carrying data between networks across an internetwork's infrastructure. The data (payload) carried through a tunnel can be frames or packets of another protocol: the tunneling protocol re-encapsulates the other protocol's frames or packets and sends them through the tunnel, and the new header supplies the routing information needed to deliver the encapsulated payload across the internetwork.
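To make the idea concrete, here is a minimal sketch of a plain GRE tunnel built with iproute2 alone, without OVS; the host addresses reuse the two machines above, while the gre1 name and 10.10.10.x tunnel addresses are made up for illustration:

#On host 10.0.0.101: wrap traffic for the peer in GRE
ip tunnel add gre1 mode gre local 10.0.0.101 remote 10.0.0.102 ttl 255
ip link set gre1 up
ip addr add 10.10.10.1/30 dev gre1

#On host 10.0.0.102: the mirror-image configuration
ip tunnel add gre1 mode gre local 10.0.0.102 remote 10.0.0.101 ttl 255
ip link set gre1 up
ip addr add 10.10.10.2/30 dev gre1

#Each host can now ping the other's 10.10.10.x address through the tunnel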

7.4.3.2 Using Open vSwitch to build a cross-host Docker network

Goal: connect the containers on two hosts with Open vSwitch so they can reach each other

7.4.3.2.1 Environment
Hostname   OS            Host IP         docker0 IP       Container subnet
ovs1       ubuntu 18.04  10.0.0.101/24   192.168.1.1/24   192.168.1.0/24
ovs2       ubuntu 18.04  10.0.0.102/24   192.168.2.1/24   192.168.2.0/24
7.4.3.2.2 Give the two hosts' docker0 bridges different subnets
#Configure the first host
[root@ovs1 ~]#vim /etc/docker/daemon.json
{
 "bip": "192.168.1.1/24",
 "registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}

[root@ovs1 ~]#systemctl restart docker
[root@ovs1 ~]#ip add show docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
 link/ether 02:42:dc:29:03:6c brd ff:ff:ff:ff:ff:ff
 inet 192.168.1.1/24 brd 192.168.1.255 scope global docker0
   valid_lft forever preferred_lft forever

#Configure the second host
[root@ovs2 ~]#vim /etc/docker/daemon.json
{
 "bip": "192.168.2.1/24",
 "registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}

[root@ovs2 ~]#systemctl restart docker
[root@ovs2 ~]#ip add show docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
 link/ether 02:42:e2:38:84:83 brd ff:ff:ff:ff:ff:ff
 inet 192.168.2.1/24 brd 192.168.2.255 scope global docker0
   valid_lft forever preferred_lft forever

7.4.3.2.3 Install openvswitch-switch and bridge-utils on both hosts and confirm the versions
#Install the packages on the first host
[root@ovs1 ~]#apt -y install openvswitch-switch bridge-utils
[root@ovs1 ~]#ps -e | grep ovs
 6766 ?     00:00:00 ovsdb-server
 6826 ?     00:00:00 ovs-vswitchd

#Check the OVS version and the OpenFlow versions OVS supports
[root@ovs1 ~]#ovs-appctl --version
ovs-appctl (Open vSwitch) 2.9.5
[root@ovs1 ~]#ovs-ofctl --version
ovs-ofctl (Open vSwitch) 2.9.5
OpenFlow versions 0x1:0x5

#View the bridges
[root@ovs1 ~]#brctl show        
bridge name bridge id STP enabled interfaces
docker0 8000.0242dc29036c no

#Install the packages on the second host
[root@ovs2 ~]#apt -y install openvswitch-switch bridge-utils
[root@ovs2 ~]#ps -e | grep ovs
 6618 ?     00:00:00 ovsdb-server
 6680 ?     00:00:00 ovs-vswitchd

#Check the OVS version and the OpenFlow versions OVS supports
[root@ovs2 ~]#ovs-appctl --version
ovs-appctl (Open vSwitch) 2.9.5
[root@ovs2 ~]#ovs-ofctl --version
ovs-ofctl (Open vSwitch) 2.9.5
OpenFlow versions 0x1:0x5
[root@ovs2 ~]#brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242e2388483 no

7.4.3.2.4 ★Create and activate the obr0 bridge on both hosts
[root@ovs1 ~]#ovs-vsctl add-br obr0
[root@ovs1 ~]#ip link set dev obr0 up
[root@ovs1 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
 link/ether 00:0c:29:6b:54:d3 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.101/24 brd 10.0.0.255 scope global eth0
   valid_lft forever preferred_lft forever
 inet6 fe80::20c:29ff:fe6b:54d3/64 scope link
   valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
 link/ether 02:42:dc:29:03:6c brd ff:ff:ff:ff:ff:ff
 inet 192.168.1.1/24 brd 192.168.1.255 scope global docker0
   valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
 link/ether ce:ff:6f:7f:4b:11 brd ff:ff:ff:ff:ff:ff
5: obr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
 link/ether f2:2b:d7:d8:a1:4d brd ff:ff:ff:ff:ff:ff
 inet6 fe80::f02b:d7ff:fed8:a14d/64 scope link
   valid_lft forever preferred_lft forever
  
#View the bridges
[root@ovs1 ~]#brctl show        #the traditional tool cannot see bridges created by OVS
bridge name bridge id STP enabled interfaces
docker0 8000.0242dc29036c no


#Repeat the above on the second host
[root@ovs2 ~]#ovs-vsctl add-br obr0
[root@ovs2 ~]#ip link set dev obr0 up
[root@ovs2 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
 link/ether 00:0c:29:01:f3:0c brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.102/24 brd 10.0.0.255 scope global eth0
   valid_lft forever preferred_lft forever
 inet6 fe80::20c:29ff:fe01:f30c/64 scope link
   valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
 link/ether 02:42:e2:38:84:83 brd ff:ff:ff:ff:ff:ff
 inet 192.168.2.1/24 brd 192.168.2.255 scope global docker0
   valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
 link/ether d6:29:ca:3a:9d:99 brd ff:ff:ff:ff:ff:ff
5: obr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
 link/ether 82:4f:05:e3:5d:42 brd ff:ff:ff:ff:ff:ff
 inet6 fe80::804f:5ff:fee3:5d42/64 scope link
   valid_lft forever preferred_lft forever

7.4.3.2.5 ★Create the GRE tunnel on both hosts (remote_ip is the peer host's IP)

Note: with more than two docker hosts in the network, create one GRE tunnel per peer; see the sketch below
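For example, a minimal sketch of meshing one host with several peers, one GRE port per peer (the peer IPs here are hypothetical):

#One GRE port per remote Docker host; the names gre0, gre1, ... are arbitrary
n=0
for peer in 10.0.0.102 10.0.0.103 10.0.0.104; do
    ovs-vsctl add-port obr0 gre$n -- set Interface gre$n type=gre options:remote_ip=$peer
    n=$((n+1))
done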

#Done in a single command; remote_ip points at the other host's IP
[root@ovs1 ~]#ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=10.0.0.102
#Or in two commands
[root@ovs1 ~]#ovs-vsctl add-port obr0 gre0
[root@ovs1 ~]#ovs-vsctl set Interface gre0 type=gre options:remote_ip=10.0.0.102
[root@ovs1 ~]#ovs-vsctl list-ports obr0
gre0

[root@ovs1 ~]#ovs-vsctl show
84cbdad7-4731-4c2e-b7d7-eecb4a56d27b
 Bridge "obr0"
   Port "gre0"
     Interface "gre0"
       type: gre
       options: {remote_ip="10.0.0.102"}
   Port "obr0"
     Interface "obr0"
       type: internal
 ovs_version: "2.9.5"
 
[root@ovs1 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
 link/ether 00:0c:29:6b:54:d3 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.101/24 brd 10.0.0.255 scope global eth0
   valid_lft forever preferred_lft forever
 inet6 fe80::20c:29ff:fe6b:54d3/64 scope link
   valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
 link/ether 02:42:dc:29:03:6c brd ff:ff:ff:ff:ff:ff
 inet 192.168.1.1/24 brd 192.168.1.255 scope global docker0
   valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
 link/ether ce:ff:6f:7f:4b:11 brd ff:ff:ff:ff:ff:ff
5: obr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
 link/ether f2:2b:d7:d8:a1:4d brd ff:ff:ff:ff:ff:ff
 inet6 fe80::f02b:d7ff:fed8:a14d/64 scope link
   valid_lft forever preferred_lft forever
6: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
 link/gre 0.0.0.0 brd 0.0.0.0
7: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
8: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
9: gre_sys@NONE: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc fq_codel master ovs-system state UNKNOWN group default qlen 1000
 link/ether ce:d2:c1:4e:be:c6 brd ff:ff:ff:ff:ff:ff
 inet6 fe80::ccd2:c1ff:fe4e:bec6/64 scope link
   valid_lft forever preferred_lft forever


#Configure the second host
[root@ovs2 ~]#ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=10.0.0.101
[root@ovs2 ~]#ovs-vsctl list-ports obr0
gre0

[root@ovs2 ~]#ovs-vsctl show
e6a3aab3-e224-4834-85fc-2516b33a67e2
 Bridge "obr0"
   Port "gre0"
     Interface "gre0"
       type: gre
       options: {remote_ip="10.0.0.101"}
   Port "obr0"
     Interface "obr0"
       type: internal
 ovs_version: "2.9.5"
 
[root@ovs2 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
 link/ether 00:0c:29:01:f3:0c brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.102/24 brd 10.0.0.255 scope global eth0
   valid_lft forever preferred_lft forever
 inet6 fe80::20c:29ff:fe01:f30c/64 scope link
   valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
 link/ether 02:42:e2:38:84:83 brd ff:ff:ff:ff:ff:ff
 inet 192.168.2.1/24 brd 192.168.2.255 scope global docker0
   valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
 link/ether d6:29:ca:3a:9d:99 brd ff:ff:ff:ff:ff:ff
5: obr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
 link/ether 82:4f:05:e3:5d:42 brd ff:ff:ff:ff:ff:ff
 inet6 fe80::804f:5ff:fee3:5d42/64 scope link
   valid_lft forever preferred_lft forever
6: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
 link/gre 0.0.0.0 brd 0.0.0.0
7: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
8: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
10: gre_sys@NONE: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc fq_codel master ovs-system state UNKNOWN group default qlen 1000
 link/ether 0a:98:48:d9:5f:83 brd ff:ff:ff:ff:ff:ff
 inet6 fe80::898:48ff:fed9:5f83/64 scope link
   valid_lft forever preferred_lft forever

7.4.3.2.6 ★On both hosts, add obr0 as a port of the docker0 bridge
#Run on the first host
[root@ovs1 ~]#brctl addif docker0 obr0
[root@ovs1 ~]#brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242dc29036c no obr0

#Run the same on the second host
[root@ovs2 ~]#brctl addif docker0 obr0
[root@ovs2 ~]#brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242e2388483 no obr0

7.4.3.2.7 ★Add static routes on both hosts (destination: the peer's Docker subnet)
#On ovs1, add the route to the peer's docker subnet
[root@ovs1 ~]#ip route add 192.168.2.0/24 dev docker0 
[root@ovs1 ~]#route -n
Kernel IP routing table
Destination   Gateway     Genmask     Flags Metric Ref  Use Iface
0.0.0.0     10.0.0.2     0.0.0.0     UG   0    0     0 eth0
10.0.0.0     0.0.0.0     255.255.255.0  U   0    0     0 eth0
192.168.1.0    0.0.0.0     255.255.255.0  U   0    0     0 docker0
192.168.2.0    0.0.0.0     255.255.255.0  U   0    0     0 docker0

#On ovs2, add the route to the peer's docker subnet
[root@ovs2 ~]#ip route add 192.168.1.0/24 dev docker0 
[root@ovs2 ~]#route -n
Kernel IP routing table
Destination   Gateway     Genmask     Flags Metric Ref  Use Iface
0.0.0.0     10.0.0.2     0.0.0.0     UG   0    0     0 eth0
10.0.0.0     0.0.0.0     255.255.255.0  U   0    0     0 eth0
192.168.1.0    0.0.0.0     255.255.255.0  U   0    0     0 docker0
192.168.2.0    0.0.0.0     255.255.255.0  U   0    0     0 docker0

7.4.3.2.8 Test connectivity between the containers across the two hosts
[root@ovs1 ~]#docker run -it alpine /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN qlen 1000
 link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN qlen 1000
 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
4: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN qlen 1000
 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
 link/ether 02:42:ac:11:01:02 brd ff:ff:ff:ff:ff:ff
 inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
   valid_lft forever preferred_lft forever

/ # ping -c 3 192.168.2.2
PING 192.168.2.2 (192.168.2.2): 56 data bytes
64 bytes from 192.168.2.2: seq=0 ttl=63 time=4.459 ms
64 bytes from 192.168.2.2: seq=1 ttl=63 time=1.279 ms
64 bytes from 192.168.2.2: seq=2 ttl=63 time=0.517 ms
--- 192.168.2.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.517/2.085/4.459 ms

[root@ovs2 ~]#docker run -it alpine /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN qlen 1000
 link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN qlen 1000
 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
4: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN qlen 1000
 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
 link/ether 02:42:ac:11:02:02 brd ff:ff:ff:ff:ff:ff
 inet 192.168.2.2/24 brd 192.168.2.255 scope global eth0
   valid_lft forever preferred_lft forever

/ # ping -c 3 192.168.1.2
PING 192.168.1.2 (192.168.1.2): 56 data bytes
64 bytes from 192.168.1.2: seq=0 ttl=63 time=1.553 ms
64 bytes from 192.168.1.2: seq=1 ttl=63 time=1.136 ms
64 bytes from 192.168.1.2: seq=2 ttl=63 time=1.176 ms
--- 192.168.1.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 1.136/1.288/1.553 ms
/ #

Start another container, running nginx, on the second host, access it from the first host's container, and observe the source IP

[root@ovs2 ~]#docker pull nginx
[root@ovs2 ~]#docker run -d --name nginx nginx
d3c26005a7626628f7baf017481217b36e3d69dabfa6cc86fe125f9548e7333c
[root@ovs2 ~]#docker exec -it nginx hostname -I
192.168.2.2
[root@ovs2 ~]#docker logs -f nginx
192.168.1.2 - - [27/Feb/2020:09:57:18 +0000] "GET / HTTP/1.1" 200 612 "-" "Wget" "-"

#Send a request from the first host's container; the access-log line above appears
[root@ovs1 ~]#docker run -it alpine wget -qO - http://192.168.2.2/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
 body {
   width: 35em;
   margin: 0 auto;
   font-family: Tahoma, Verdana, Arial, sans-serif;
 }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

7.4.3.2.9 Save the configuration in scripts on both hosts for restoring it at boot
#ovs1 configuration
[root@ovs1 ~]#cat > net.sh <<EOF 
#!/bin/bash
ip link set dev obr0 up
brctl addif docker0 obr0
ip route add 192.168.2.0/24 dev docker0
EOF
[root@ovs1 ~]#chmod +x net.sh

#ovs2 configuration
[root@ovs2 ~]#cat > net.sh <<EOF 
#!/bin/bash
ip link set dev obr0 up
brctl addif docker0 obr0
ip route add 192.168.1.0/24 dev docker0
EOF
[root@ovs2 ~]#chmod +x net.sh
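net.sh is not yet hooked into the boot sequence. One way to do that, a sketch consistent with the rc.local method used for harbor later in 8.4.2.6.2 (the /root path for the script is an assumption):

#Run net.sh at boot via rc.local, on both hosts
cat >> /etc/rc.local <<EOF
bash /root/net.sh
EOF
chmod +x /etc/rc.local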

7.4.4 Method 4: use weave to interconnect containers across hosts

7.4.4.1 weave introduction

The word weave means to interlace; the project builds a virtual network that connects Docker containers running on different hosts

Official site: http://weave.works
github: https://github.com/weaveworks/weave#readme

7.4.4.2 The weave workflow for cross-host container interconnection

Official documentation: https://www.weave.works/docs/net/latest/install/using-weave/

The workflow, condensed into the sketch after this list:

  • Install weave
  • Launch weave: $ weave launch
  • Connect the hosts
  • Start containers through weave
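In commands, the flow looks like this sketch (the second host connects to the first, whose IP is assumed to be 10.0.0.100 as in the case study below):

#1. Install weave on both hosts
wget -O /usr/bin/weave https://raw.githubusercontent.com/zettio/weave/master/weave
chmod +x /usr/bin/weave
#2. Launch weave on the first host
weave launch
#3. Launch weave on the other host, pointing it at the first
weave launch 10.0.0.100
#4. Start containers through weave's Docker environment
eval $(weave env)
docker run --name a1 -ti weaveworks/ubuntu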

7.4.4.3 Case study: interconnecting containers across hosts with weave

7.4.4.3.1 Environment
Hostname    OS            Host IP         docker0 IP      Container
ubuntu1804  ubuntu 18.04  10.0.0.100/24   172.17.0.1/16   a1
centos7     centos 7.8    10.0.0.200/24   172.17.0.1/16   a2
7.4.4.3.2 Install weave
[root@ubuntu1804 ~]#wget -O /usr/bin/weave https://raw.githubusercontent.com/zettio/weave/master/weave
--2021-11-08 17:05:08--  https://raw.githubusercontent.com/zettio/weave/master/weave
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.108.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 52484 (51K) [text/plain]
Saving to: ‘/usr/bin/weave’
/usr/bin/weave        100%[===========================================>]  51.25K  174KB/s   in 0.3s  
--2021-11-08 17:05:15 (174 KB/s) - ‘/usr/bin/weave’ saved [52484/52484]

[root@ubuntu1804 ~]#ll /usr/bin/weave
-rw-r--r-- 1 root root 52484 Jul 23 17:05 /usr/bin/weave

#Repeat the same operation on the second host
[root@centos7 ~]#wget -O /usr/bin/weave https://raw.githubusercontent.com/zettio/weave/master/weave
[root@centos7 ~]#ll -h /usr/bin/weave
-rw-r--r-- 1 root root 52K Jul 23 17:08 /usr/bin/weave

7.4.4.3.3 Launch weave on the first host
[root@ubuntu1804 ~]#chmod +x /usr/bin/weave
[root@ubuntu1804 ~]#weave launch
latest: Pulling from weaveworks/weave
21c83c524219: Pull complete
cfdfcbee9cb6: Pull complete
a56e93dd024c: Pull complete
fea445d5ce38: Pull complete
59db32ebe99d: Pull complete
Digest: sha256:743ae84161f97bba965d5565344b04ff40770b32b06e1f55c8dfb1250c880e88
Status: Downloaded newer image for weaveworks/weave:latest
docker.io/weaveworks/weave:latest
latest: Pulling from weaveworks/weavedb
72bf8a6af285: Pull complete
Digest: sha256:7badb003b9c0bf5c51bf801be2a4d5d371f0738818f9cbe60a508f54fd07de9a
Status: Downloaded newer image for weaveworks/weavedb:latest
docker.io/weaveworks/weavedb:latest
Unable to find image 'weaveworks/weaveexec:latest' locally
latest: Pulling from weaveworks/weaveexec
21c83c524219: Already exists
cfdfcbee9cb6: Already exists
a56e93dd024c: Already exists
fea445d5ce38: Already exists
59db32ebe99d: Already exists
12d4ba695a4d: Pull complete
298ee7d058fb: Pull complete
85e41461f1eb: Pull complete
f322a47dd011: Pull complete
Digest: sha256:e666e66bf10c9da5dce52b777e86f9fbb62157169614a80f2dbe324b335c5602
Status: Downloaded newer image for weaveworks/weaveexec:latest
0244d489c1dfe1b403a6e1cf228f260e84cafabea408d8cdabee1b0ca09c7519

7.4.4.3.4 Launch weave on the second host and connect it to the first
#Launch weave on the second host
[root@centos7 ~]#chmod +x /usr/bin/weave
[root@centos7 ~]#weave launch 10.0.0.100
latest: Pulling from weaveworks/weave
21c83c524219: Pull complete
cfdfcbee9cb6: Pull complete
a56e93dd024c: Pull complete
fea445d5ce38: Pull complete
59db32ebe99d: Pull complete
Digest: sha256:743ae84161f97bba965d5565344b04ff40770b32b06e1f55c8dfb1250c880e88
Status: Downloaded newer image for weaveworks/weave:latest
docker.io/weaveworks/weave:latest
latest: Pulling from weaveworks/weavedb
72bf8a6af285: Pull complete
Digest: sha256:7badb003b9c0bf5c51bf801be2a4d5d371f0738818f9cbe60a508f54fd07de9a
Status: Downloaded newer image for weaveworks/weavedb:latest
docker.io/weaveworks/weavedb:latest
Unable to find image 'weaveworks/weaveexec:latest' locally
latest: Pulling from weaveworks/weaveexec
21c83c524219: Already exists
cfdfcbee9cb6: Already exists
a56e93dd024c: Already exists
fea445d5ce38: Already exists
59db32ebe99d: Already exists
12d4ba695a4d: Pull complete
298ee7d058fb: Pull complete
85e41461f1eb: Pull complete
f322a47dd011: Pull complete
Digest: sha256:e666e66bf10c9da5dce52b777e86f9fbb62157169614a80f2dbe324b335c5602
Status: Downloaded newer image for weaveworks/weaveexec:latest
ab2ebef1670374ae81c3031c9de96df4136e55749f914eaddcb8e5e5aee60c6c

#Check the connection between the two hosts
[root@centos7 ~]#ss -nt
State   Recv-Q Send-Q  LocalAddress:Port   Peer Address:Port      
ESTAB    0    0   10.0.0.200:46521  10.0.0.100:6783  

7.4.4.3.5 Start containers through weave
#Initialize the environment on the first host
[root@ubuntu1804 ~]#eval $(weave env)

#Start a container on the first host
[root@ubuntu1804 ~]#docker run --name a1 -ti weaveworks/ubuntu
root@a1:/# hostname -i
10.32.0.1
root@a1:/# ping -c1 a2
PING a2 (10.44.0.0) 56(84) bytes of data.
64 bytes from a2.weave.local (10.44.0.0): icmp_seq=1 ttl=64 time=3.33 ms

#Initialize the environment on the second host
[root@centos7 ~]#eval $(weave env)

#Start a container on the second host
[root@centos7 ~]#docker run --name a2 -ti weaveworks/ubuntu
root@a2:/# hostname -i
10.44.0.0
root@a2:/# ping a1 -c1
PING a1 (10.32.0.1) 56(84) bytes of data.
64 bytes from a1.weave.local (10.32.0.1): icmp_seq=1 ttl=64 time=0.474 ms

7.4.4.3.6 View the containers on each host
#weave-related containers were created automatically
[root@ubuntu1804 ~]#docker ps -a
CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS                     PORTS   NAMES
9374a414902a   weaveworks/ubuntu             "/w/w /bin/bash"         5 minutes ago    Exited (0) 2 minutes ago           a1
0244d489c1df   weaveworks/weave:latest       "/home/weave/weaver …"   37 minutes ago   Up 37 minutes                      weave
7048529f3ac6   weaveworks/weaveexec:latest   "data-only"              37 minutes ago   Created                            weavevolumes-latest
7fa02af2b482   weaveworks/weavedb:latest     "data-only"              37 minutes ago   Created                            weavedb
[root@centos7 ~]#docker ps -a
CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS          PORTS   NAMES
be519cda22f7   weaveworks/ubuntu             "/w/w /bin/bash"         3 minutes ago    Up 3 minutes            a2
ab2ebef16703   weaveworks/weave:latest       "/home/weave/weaver …"   31 minutes ago   Up 31 minutes           weave
f7b21ebd2b09   weaveworks/weaveexec:latest   "data-only"              31 minutes ago   Created                 weavevolumes-latest
b7879a1e77b8   weaveworks/weavedb:latest     "data-only"              31 minutes ago   Created                 weavedb

8、Docker Repository Management

A Docker repository, much like a yum repository, stores images. To manage and use Docker images conveniently, they can be kept centrally in a repository: push finished images to it for safekeeping, and pull them from it whenever they are needed.

Docker repositories come in public-cloud and private-cloud flavors

Public-cloud repositories: repositories that internet companies expose to the public

  • The official Docker Hub
  • Third-party repositories such as Alibaba Cloud

Private-cloud repositories: repositories built inside an organization, generally for internal use only, usually with one of the following:

  • docker registry
  • docker harbor

8.1 The official Docker repository

Upload home-made images to the Docker repository: https://hub.docker.com/

8.1.1 Register an account

8.1.2 Manage images in a user repository

Every registered user can upload and manage their own images

8.1.2.1 Log in

Before pushing images you must log in with the docker login command; a successful login writes the credentials to ~/.docker/config.json

Format

docker login [OPTIONS] [SERVER]
Options: 
-p, --password string  Password
    --password-stdin  Take the password from stdin
-u, --username string  Username

#Interactive login
[root@ubuntu1804 ~]#docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't
have a Docker ID, head over to https://hub.docker.com to create one.
Username: sunxianglearning
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded


#Non-interactive login
[root@CT7test1 ~]# docker login  -u sunxianglearning -pxxxxxxx 
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
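To avoid putting the password on the command line at all, docker login can also read it from stdin, as the warning above suggests; a sketch (password elided):

#Non-interactive login without the password in argv
echo 'xxxxxxx' | docker login -u sunxianglearning --password-stdin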

#The file generated automatically after a successful login stores the credentials; later logins happen automatically, with no manual login needed
[root@CT7test1 ~]# cat .docker/config.json 
{
    "auths": {
        "10.0.0.7": {
            "auth": "YWRtaW46MTIzNDVzeA=="
        },
        "https://index.docker.io/v1/": {
            "auth": "c3VueGlhbmdsZWFybmluZzpzeEBseDEzMTQ="
        }
    },
    "HttpHeaders": {
        "User-Agent": "Docker-Client/19.03.5 (linux)"
    }
}


8.1.2.2 Tag the local image

Before pushing, the local image must be tagged with the docker tag command
Tag format: docker.io/<user account>/<image name>:TAG

[root@CT7test1 ~]# docker tag alpine:latest docker.io/sunxianglearning/alpine:latest-v1
[root@CT7test1 ~]# docker images 
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
sunxianglearning/alpine         latest-v1           14119a10abf4        2 months ago        5.6MB

8.1.2.3 Push the local image to Docker Hub

#If the tag is omitted, all tags of the given REPOSITORY are pushed; example below
[root@CT7test1 ~]# docker push docker.io/sunxianglearning/alpine:latest-v1
The push refers to repository [docker.io/sunxianglearning/alpine]
e2eb06d8af82: Mounted from library/alpine 
latest-v1: digest: sha256:69704ef328d05a9f806b6b8502915e6a0a4faa4d72018dc42343f511490daf8a size: 528
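As the comment above notes, omitting the tag pushes every local tag of the repository; with the Docker 19.03 client used in this document that is the default behavior (newer clients require --all-tags):

#Pushes all local tags under this repository
docker push docker.io/sunxianglearning/alpine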


8.1.2.4 Verify the uploaded image on the website

8.1.2.5 Pull the uploaded image and create a container

#Pull the image
[root@CT8test2 ~]# docker pull sunxianglearning/alpine:latest-v1
latest-v1: Pulling from sunxianglearning/alpine
a0d0a0d46f8b: Pull complete 
Digest: sha256:69704ef328d05a9f806b6b8502915e6a0a4faa4d72018dc42343f511490daf8a
Status: Downloaded newer image for sunxianglearning/alpine:latest-v1
docker.io/sunxianglearning/alpine:latest-v1


#Create a container
[root@CT8test2 ~]# docker run -it --name test1  sunxianglearning/alpine:latest-v1 sh 
/ # 


[root@CT8test2 ~]# docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS                             PORTS                       NAMES
31dfee104b0a        sunxianglearning/alpine:latest-v1     "sh"                     9 seconds ago       Up 8 seconds                                                   test1


8.1.3 Manage images with an organization

An organization works like a namespace: each organization's name is unique across the site, one organization can be used by multiple user accounts, and different users can be granted different permissions on the organization's repositories

Three permission levels

  • Read-only: Pull and view repository details and builds
  • Read &Write: Pull, push, and view a repository; view, cancel, retry or trigger builds
  • Admin: Pull, push, view, edit, and delete a repository; edit build settings; update the repository description

8.1.3.1 Create an organization

Organizations are a paid feature, so I'll skip this

8.1.3.2 Create teams within the organization and assign permissions

8.1.3.3 Log in before pushing images

8.1.3.4 Tag the local image

8.1.3.5 Push the image to the chosen organization

8.1.3.6 View the uploaded image on the website

8.1.3.7 Pull the uploaded image and run a container

8.2 The Alibaba Cloud Docker repository

8.2.1 Register and log in to the Alibaba Cloud repository

Open http://cr.console.aliyun.com in a browser and log in with your registered account

8.2.2 Set a dedicated repository password

8.2.3 Create a repository

This step can be skipped: docker push creates the private repository automatically


Check the repository path; it is used when pushing images

8.2.4 Log in to Alibaba Cloud and push images

8.3 Docker Registry: a single-host private repository (rarely used and not covered in depth here; see the official docs if needed)

8.3.1 Docker Registry introduction

Docker Registry, one of Docker's core components, stores and distributes image content for a single host; the client's docker pull and docker push commands interact with the registry directly. The first registry was implemented in Python, but because of many flaws in its security, performance, and API design, development stopped after version 0.9, and a new project, distribution (the new registry is known as Distribution), redesigned and built the next-generation registry in Go, reworking all the APIs, the underlying storage format, and the architecture to fix the earlier problems. In April 2015, registry 2.0 was officially released, supported from docker 1.6; in August, with the release of docker 1.8, Docker Hub adopted registry 2.1, fully replacing the old version. The new registry's redesigned image storage format is incompatible with the old one, so docker 1.5 and earlier cannot read 2.0 images. In addition, Registry 2.4 introduced a garbage-collection mechanism, i.e. images can finally be deleted, which was impossible before 2.4, so if you use it, pick a version above 2.4

Official docs: https://docs.docker.com/registry/
Official github: https://github.com/docker/distribution
Official deployment doc: https://github.com/docker/docker.github.io/blob/master/registry/deploying.md
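Although Registry is not covered further here, the official docs boil the simplest deployment down to one container; a minimal sketch:

#Run a local registry on port 5000 using the official registry:2 image
docker run -d -p 5000:5000 --restart=always --name registry registry:2

#Push an image to it: tag it with the registry address, then push
docker tag alpine:latest localhost:5000/alpine:latest
docker push localhost:5000/alpine:latest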

8.4 ★★Harbor, a distributed Docker repository★★

8.4.1 Harbor introduction and architecture

8.4.1.1 Harbor introduction

Harbor is an enterprise-class Registry server for storing and distributing Docker images, open-sourced by VMware. It extends the open-source Docker Distribution with the features enterprises require, such as security, identity, and management. As an enterprise private Registry server, Harbor offers better performance and security and makes it more efficient to transfer images between build and run environments. Harbor supports replicating images across multiple Registry nodes and keeps all images in the private Registry, so data and intellectual property stay under control on the company network; it also provides advanced security features such as user management, access control, and activity auditing

VMware official open-source site: https://vmware.github.io/

harbor official github: https://github.com/vmware/harbor

harbor official site: https://goharbor.io/

harbor official docs: https://goharbor.io/docs/

github docs: https://github.com/goharbor/harbor/tree/master/docs

8.4.1.2 Harbor features (official description)

  • Role-based access control: users and image repositories are organized through "projects"; a user can hold different permissions on the image repositories within a namespace (project)
  • Image replication: images can be replicated (synchronized) between multiple Registry instances, which suits load balancing, high availability, hybrid-cloud, and multi-cloud scenarios
  • Graphical user interface: users can browse and search the Docker image repository and manage projects and namespaces from a browser
  • AD/LDAP support: Harbor integrates with an organization's existing AD/LDAP for authentication and authorization
  • Audit management: all operations on the image repository can be recorded and traced for auditing
  • Internationalization: localized into English, Chinese, German, Japanese, and Russian, with more languages to come
  • RESTful API: gives administrators more control over Harbor and simplifies integration with other management software
  • Simple deployment: online and offline installers are provided, and it can also be installed as a vSphere virtual appliance (OVA)

8.4.1.3 Harbor components

[root@CT8test2 ~]# docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS                             PORTS                       NAMES
becd1883207d        goharbor/nginx-photon:v1.10.9         "nginx -g 'daemon of…"   10 minutes ago      Up 10 minutes (unhealthy)          0.0.0.0:80->8080/tcp        nginx
fdcedf231f26        goharbor/harbor-jobservice:v1.10.9    "/harbor/harbor_jobs…"   10 minutes ago      Up 33 seconds (health: starting)                               harbor-jobservice
511d2de6972d        goharbor/harbor-core:v1.10.9          "/harbor/harbor_core"    10 minutes ago      Up 32 seconds (health: starting)                               harbor-core
e644d1d9b1ea        goharbor/registry-photon:v1.10.9      "/home/harbor/entryp…"   10 minutes ago      Up 10 minutes (healthy)            5000/tcp                    registry
7939661e7004        goharbor/harbor-portal:v1.10.9        "nginx -g 'daemon of…"   10 minutes ago      Up 10 minutes (healthy)            8080/tcp                    harbor-portal
15563e3e476e        goharbor/harbor-db:v1.10.9            "/docker-entrypoint.…"   10 minutes ago      Up 10 minutes (healthy)            5432/tcp                    harbor-db
585fee8981bc        goharbor/redis-photon:v1.10.9         "redis-server /etc/r…"   10 minutes ago      Up 10 minutes (healthy)            6379/tcp                    redis
d88cdaa5debc        goharbor/harbor-registryctl:v1.10.9   "/home/harbor/start.…"   10 minutes ago      Up 10 minutes (healthy)                                        registryctl
d799f1b5b1a7        goharbor/harbor-log:v1.10.9           "/bin/sh -c /usr/loc…"   10 minutes ago      Up 10 minutes (healthy)            127.0.0.1:1514->10514/tcp   harbor-log


  • Proxy: the nginx startup component. An nginx reverse proxy that forwards requests from the Notary client (image signing), the Docker client (image push/pull, etc.), and the browser (Core Service) to the backend services
  • UI (Core Service): the harbor-ui startup component, with a mysql database as the underlying store, providing four sub-functions:
    • UI: a web management interface
    • API: the API service Harbor exposes
    • Auth: user authentication; the user information in the decoded token is verified here; the auth backend can be db, ldap, or uaa
    • Token service (not shown in the diagram above): issues a token for each docker push/pull command according to the user's role in the project; if a request from the docker client reaches the registry without a token, the registry redirects it to the token service to create one
  • Registry: the registry startup component; stores image files and handles pull/push commands. Harbor enforces access control on images: the Registry forwards every pull and push request to the token service to obtain a valid token
  • Admin Service: the harbor-adminserver startup component; the system's configuration-management center, which also checks storage usage; ui and jobservice load the adminserver configuration at startup
  • Job Service: the harbor-jobservice startup component; performs image replication, talking to registries to pull an image from one and push it to another, and records the job_log
  • Log Collector: the harbor-log startup component; aggregates the other components' logs through docker's log-driver
  • DB: the harbor-db startup component; stores metadata such as project, user, role, replication, image_scan, and access

8.4.2 Installing Harbor

Download: https://github.com/vmware/harbor/releases

Installation doc: https://github.com/goharbor/harbor/blob/master/docs/install-config/_index.md

Environment: four hosts in total

  • two harbor servers, addresses: 10.0.0.101|102
  • two harbor clients for pushing and pulling images

8.4.2.1 Install docker

#Just run the install scripts created earlier
#harbor1 uses centos7 as its host
[root@CT7test1 ~]# cat /root/install_docker_centos7.sh
#!/bin/bash

COLOR="echo -e \\033[1;31m"
END="\033[m"
VERSION="19.03.5-3.el7"

wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo || { ${COLOR}"Internet connection failed, please check the network configuration!"${END};exit; }
yum clean all
yum -y install docker-ce-${VERSION} docker-ce-cli-${VERSION} || { ${COLOR}"Base/Extras yum repos failed, please check the yum repo configuration"${END};exit; }

#Use the Alibaba Cloud registry mirror for acceleration
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}
EOF

systemctl enable --now docker
docker version && ${COLOR}"Docker installed successfully"${END} || ${COLOR}"Docker installation failed"${END}

[root@CT7test1 ~]# 

[root@CT7test1 ~]# docker version 
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:25:41 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:24:18 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.11
  GitCommit:        5b46e404f6b9f661a205e28d59c982d3634148f8
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
[root@CT7test1 ~]# 


#harbor2 uses centos8 as its host
[root@CT8test2 ~]# cat /root/install_docker_centos8.sh 
#!/bin/bash

. /etc/init.d/functions
COLOR="echo -e \\033[1;32m"
END="\033[m"
DOCKER_VERSION="-19.03.13-3.el8"

install_docker() {
rpm -q docker-ce &> /dev/null && action "Docker is already installed" && exit 
${COLOR}"Starting Docker installation....."${END}
sleep 1

# Step 1: install the required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository information
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: point the repository at the mirror
 sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Step 4: refresh the cache and install Docker-CE
yum makecache 
yum -y install  docker-ce${DOCKER_VERSION} docker-ce-cli${DOCKER_VERSION}
# Step 5: enable and start the Docker service
systemctl enable --now docker
}
install_docker
[root@CT8test2 ~]# 


[root@CT8test2 ~]# docker version 
Client: Docker Engine - Community
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46d9d
 Built:             Wed Sep 16 17:02:36 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       4484c46d9d
  Built:            Wed Sep 16 17:01:11 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.11
  GitCommit:        5b46e404f6b9f661a205e28d59c982d3634148f8
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
[root@CT8test2 ~]# 



8.4.2.2 Install docker compose first

#docker compose must be installed before harbor, otherwise the error below is raised
[root@ubuntu1804 ~]#/apps/harbor/install.sh
[Step 0]: checking installation environment ...
Note: docker version: 19.03.5
✖ Need to install docker-compose(1.7.1+) by yourself first and run this script again
[root@ubuntu1804 ~]#

Installing docker compose

1. Install the official way
On the https://docs.docker.com/install/ page, pick Docker Compose -> Install Compose on the left and Linux on the right (note: on Mac and Windows, Docker Compose ships with Docker and needs no separate install), then follow the four documented steps
# Download docker compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# Make it executable
sudo chmod +x /usr/local/bin/docker-compose
# Link the file into /usr/bin/
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
# Check the version
docker-compose --version

2. Install via pip (I could never get this to work, perhaps due to network problems)
#Install pip
yum -y install epel-release
yum -y install python-pip
#Check the version
pip --version
#Upgrade pip
pip install --upgrade pip
#Install docker-compose
pip install docker-compose 
#Check the docker compose version
docker-compose version


3. Offline install
Visit https://github.com/docker/compose/releases and download docker-compose-Linux-x86_64 (I copied the link and fetched it with wget straight onto the machine); rename docker-compose-Linux-x86_64 to docker-compose and place it under /usr/local/bin
# Make it executable
sudo chmod +x /usr/local/bin/docker-compose
# Check the docker-compose version
docker-compose -v

8.4.2.3 Download and unpack the Harbor installer

Official packages: https://github.com/goharbor/harbor/releases

The stable harbor 1.10.9 installer is used below
Method 1: download the complete offline installer (recommended)

https://github.com/goharbor/harbor/releases/download/v1.10.9/harbor-offline-installer-v1.10.9.tgz

Method 2: download the online installer (slow, not really recommended)

https://github.com/goharbor/harbor/releases/download/v1.10.9/harbor-online-installer-v1.10.9.tgz

Unpack the package after downloading

[root@CT8test2 ~]# ll /root/har*
-rw-r--r--. 1 root root 597803208 Nov  9 11:52 /root/harbor-offline-installer-v1.10.9.tgz
-rw-r--r--. 1 root root      8487 Oct 29 11:23 /root/harbor-online-installer-v1.10.9.tgz

[root@CT8test2 ~]# mkdir /apps
[root@CT8test2 ~]# tar xvf harbor-offline-installer-v1.10.9.tgz -C /apps/
harbor/harbor.v1.10.9.tar.gz
harbor/prepare
harbor/LICENSE
harbor/install.sh
harbor/common.sh
harbor/harbor.yml

[root@CT8test2 ~]# ll /apps/
total 0
drwxr-xr-x. 2 root root 118 Nov  9 14:10 harbor


8.4.2.4 Edit the configuration file harbor.yml

Latest documentation: https://goharbor.io/docs/2.4.0/install-config/configure-yml-file/

[root@CT8test2 ~]# vim /apps/harbor/harbor.yml 
#Only the following two lines need changing
hostname: 10.0.0.7  #change this line to the current host's IP or FQDN
harbor_admin_password: 123456 #change this line to set the password of harbor's admin user; default user/password: admin/Harbor12345


#Optional (these use the older harbor.cfg syntax; in harbor.yml the https settings live in the https: block shown below)
ui_url_protocol = http #default; if changed to https, the certificate paths below must be set
ssl_cert = /data/cert/server.crt #default; for https, the certificate file path
ssl_cert_key = /data/cert/server.key  #default; for https, the private key file path

8.4.2.5 Run the harbor installation script

#Install python first (not sure whether python2 or python3 is required; I simply installed python2, exact default version unchecked)
[root@CT8test2 ~]# yum install -y python2.x86_64 
#Install docker harbor
[root@CT8test2 ~]# /apps/harbor/install.sh 

[Step 0]: checking if docker is installed ...

Note: docker version: 19.03.13

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 1.29.2

[Step 2]: loading Harbor images ...
Loaded image: goharbor/harbor-core:v1.10.9
Loaded image: goharbor/harbor-jobservice:v1.10.9
Loaded image: goharbor/notary-signer-photon:v1.10.9
Loaded image: goharbor/nginx-photon:v1.10.9
Loaded image: goharbor/chartmuseum-photon:v1.10.9
Loaded image: goharbor/registry-photon:v1.10.9
Loaded image: goharbor/clair-photon:v1.10.9
Loaded image: goharbor/clair-adapter-photon:v1.10.9
Loaded image: goharbor/prepare:v1.10.9
Loaded image: goharbor/harbor-portal:v1.10.9
Loaded image: goharbor/harbor-db:v1.10.9
Loaded image: goharbor/notary-server-photon:v1.10.9
Loaded image: goharbor/harbor-log:v1.10.9
Loaded image: goharbor/harbor-registryctl:v1.10.9
Loaded image: goharbor/redis-photon:v1.10.9


[Step 3]: preparing environment ...

[Step 4]: preparing harbor configs ...
prepare base dir is set to /apps/harbor
/usr/src/app/utils/configs.py:100: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  configs = yaml.load(f)
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
/usr/src/app/utils/configs.py:90: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  versions = yaml.load(f)
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /secret/keys/secretkey
Generated certificate, key file: /secret/core/private_key.pem, cert file: /secret/registry/root.crt
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir



[Step 5]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating registry      ... done
Creating harbor-portal ... done
Creating redis         ... done
Creating harbor-db     ... done
Creating registryctl   ... done
Creating harbor-core   ... done
Creating harbor-jobservice ... done
Creating nginx             ... done
✔ ----Harbor has been installed and started successfully.----
[root@CT8test2 ~]# 


Logging in over http
Option 1:
#Comment out the https section in harbor.yml, otherwise running the script reports an error
[root@CT8test2 ~]# vim /apps/harbor/harbor.yml
# https related config
#https:
  # https port for harbor, default is 443
  # port: 443
  # The path of cert and key files for nginx
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path


#Add the option --insecure-registry=10.0.0.7 to the service file (10.0.0.7 is the hostname field in harbor.yml)
[root@CT8test2 ~]# vim /usr/lib/systemd/system/docker.service 
[Service]
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry=10.0.0.7

#Reload the unit files and restart the service
[root@CT8test2 ~]# systemctl daemon-reload 
[root@CT8test2 ~]# systemctl restart docker

#Restart the docker-compose services
/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml down
/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml up


Option 2: (from the official docs; unclear what belongs inside the brackets, and I never got it working)
#Edit /etc/docker/daemon.json and add the following
{
"insecure-registries" : ["myregistrydomain.com:5000", "0.0.0.0"]
}

#Restart the docker service
systemctl restart docker

#Restart the docker-compose services
/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml down
/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml up

Logging in over https (see 8.4.5)

8.4.2.6 Start harbor automatically at boot

8.4.2.6.1 Method 1: a systemd service file
[root@CT8test2 ~]# vim /lib/systemd/system/harbor.service
[root@CT8test2 ~]# cat /lib/systemd/system/harbor.service 
[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor

[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target


[root@CT8test2 ~]# systemctl daemon-reload 
[root@CT8test2 ~]# systemctl enable --now harbor.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/harbor.service to /usr/lib/systemd/system/harbor.service.
[root@CT8test2 ~]# systemctl status harbor.service 
● harbor.service - Harbor
   Loaded: loaded (/usr/lib/systemd/system/harbor.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-11-10 11:45:45 CST; 11s ago
     Docs: http://github.com/vmware/harbor
 Main PID: 27209 (docker-compose)
    Tasks: 12
   Memory: 61.5M
   CGroup: /system.slice/harbor.service
           ├─27209 /usr/bin/docker-compose -f /apps/harbor/docker-compose.yml up
           └─27210 /usr/bin/docker-compose -f /apps/harbor/docker-compose.yml up

Nov 10 11:45:47 CT7test1 docker-compose[27209]: redis          | 1:M 10 Nov 03:45:46.964 * ...ds
Nov 10 11:45:47 CT7test1 docker-compose[27209]: redis          | 1:M 10 Nov 03:45:46.964 * ...ns
Nov 10 11:45:47 CT7test1 docker-compose[27209]: registry       | ls: /harbor_cust_cert: No ...ry
Nov 10 11:45:47 CT7test1 docker-compose[27209]: registry       | time="2021-11-10T03:45:46.....m
Nov 10 11:45:47 CT7test1 docker-compose[27209]: registry       | time="2021-11-10T03:45:46.....m
Nov 10 11:45:47 CT7test1 docker-compose[27209]: registry       | time="2021-11-10T03:45:46....1"
Nov 10 11:45:47 CT7test1 docker-compose[27209]: registry       | time="2021-11-10T03:45:46.....m
Nov 10 11:45:47 CT7test1 docker-compose[27209]: registry       | 172.18.0.8 - - [10/Nov/202...1"
Nov 10 11:45:47 CT7test1 docker-compose[27209]: registryctl    | ls: /harbor_cust_cert: No ...ry
Nov 10 11:45:47 CT7test1 docker-compose[27209]: registryctl    | 172.18.0.8 - - [10/Nov/202... 9
Hint: Some lines were ellipsized, use -l to show in full.



8.4.2.6.2 Method 2: rc.local
[root@CT8test2 ~]# cat /etc/rc.local
#!/bin/bash
cd /apps/harbor
/usr/bin/docker-compose up
[root@CT8test2 ~]# chmod +x /etc/rc.local

8.4.2.7 Log in to the harbor website

Browse to: http://10.0.0.7/
Username: admin
Password: the one set in harbor.yml above

8.4.2.8 Case study: a one-shot Harbor installation script
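The original leaves this section empty; below is a minimal sketch of such a script, stitching together the steps above. It assumes docker and docker-compose are already installed, that the installer tarball sits in the current directory, and it reuses the paths, version, IP, and password used in this document:

#!/bin/bash
#One-shot Harbor 1.10.9 install: unpack, configure http-only, run the installer
HARBOR_VERSION=v1.10.9
HARBOR_IP=10.0.0.7
HARBOR_PASSWORD=123456
INSTALL_DIR=/apps

mkdir -p ${INSTALL_DIR}
tar xf harbor-offline-installer-${HARBOR_VERSION}.tgz -C ${INSTALL_DIR}
#Point hostname at this host and set the admin password
sed -ri "s/^hostname: .*/hostname: ${HARBOR_IP}/" ${INSTALL_DIR}/harbor/harbor.yml
sed -ri "s/^harbor_admin_password: .*/harbor_admin_password: ${HARBOR_PASSWORD}/" ${INSTALL_DIR}/harbor/harbor.yml
#Comment out the https block so an http-only install succeeds (assumes the default 1.10 file layout)
sed -ri 's/^(https:)/#\1/;s/^(  port: 443)/#\1/;s/^(  certificate:)/#\1/;s/^(  private_key:)/#\1/' ${INSTALL_DIR}/harbor/harbor.yml
${INSTALL_DIR}/harbor/install.sh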

8.4.3 Using a single-host harbor

8.4.3.1 Create a project

A project must first be created in harbor before images can be pushed to it

8.4.3.2 Log in to harbor from the command line

#Add --insecure-registry 10.0.0.7 to the configuration so that http login works
[root@CT7test1 ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry 10.0.0.7

[root@CT7test1 ~]# systemctl daemon-reload
[root@CT7test1 ~]# systemctl restart docker
[root@CT7test1 ~]# docker login 10.0.0.7
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

#Check that the running daemon picked up the setting above
[root@CT7test1 ~]# ps aux | grep dockerd
root      17192  0.2  4.5 660416 85116 ?        Ssl  11:17   0:30 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry=10.0.0.7
root     114034  0.0  0.0 112812   964 pts/0    R+   15:27   0:00 grep --color=auto dockerd

[root@CT7test1 ~]# cat .docker/config.json 
{
    "auths": {
        "10.0.0.7": {
            "auth": "YWRtaW46MTIzNDVzeA=="
        }
    },
    "HttpHeaders": {
        "User-Agent": "Docker-Client/19.03.5 (linux)"
    }
}


8.4.3.3 Tag the local image and push it to harbor

Rename the image with docker tag; without the required format the image cannot be pushed to the harbor repository
The format is:

Harbor host IP/project name/image name:tag

#Log in to harbor before pushing the image
[root@CT7test1 ~]# docker login 10.0.0.7
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded



[root@CT7test1 ~]# docker  pull alpine
Using default tag: latest
latest: Pulling from library/alpine
a0d0a0d46f8b: Pull complete 
Digest: sha256:e1c082e3d3c45cccac829840a25941e679c25d438cc8412c2fa221cf1a824e6a
Status: Downloaded newer image for alpine:latest
docker.io/library/alpine:latest
[root@CT7test1 ~]# docker images alpine
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
alpine              latest              14119a10abf4        2 months ago        5.6MB

#Create a tagged image
[root@CT7test1 ~]# docker tag alpine:latest 10.0.0.7/test1/alpine-test:latest

#Upload the tagged image
[root@CT7test1 ~]# docker push 10.0.0.7/test1/alpine-test:latest 
The push refers to repository [10.0.0.7/test1/alpine-test]
e2eb06d8af82: Pushed 
latest: digest: sha256:69704ef328d05a9f806b6b8502915e6a0a4faa4d72018dc42343f511490daf8a size: 528


Visit the harbor website to verify the image was uploaded successfully

If the project has not been created beforehand, the upload fails

[root@CT7test1 ~]# docker tag alpine:latest 10.0.0.7/test2/alpine-test:latest
[root@CT7test1 ~]# docker push 10.0.0.7/test2/alpine-test:latest 
The push refers to repository [10.0.0.7/test2/alpine-test]
e2eb06d8af82: Preparing 
denied: requested access to the resource is denied
[root@CT7test1 ~]# 


8.4.3.4 Download images from harbor

Before downloading, the docker service file must be modified to add the harbor server address

#Modify the docker service file to add the harbor server address
root@ubuntu1804:/# cat /lib/systemd/system/docker.service 
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --bip=192.168.0.1/24 --insecure-registry 10.0.0.7 

root@ubuntu1804:/# systemctl daemon-reload 
root@ubuntu1804:/# systemctl restart docker.service 
root@ubuntu1804:/# docker login 10.0.0.7
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded


#Download the image from harbor
root@ubuntu1804:/# docker pull 10.0.0.7/test1/alpine-test
Using default tag: latest
latest: Pulling from test1/alpine-test
a0d0a0d46f8b: Pull complete 
Digest: sha256:69704ef328d05a9f806b6b8502915e6a0a4faa4d72018dc42343f511490daf8a
Status: Downloaded newer image for 10.0.0.7/test1/alpine-test:latest
10.0.0.7/test1/alpine-test:latest
root@ubuntu1804:/# docker images 10.0.0.7/test1/alpine-test
REPOSITORY                   TAG                 IMAGE ID            CREATED             SIZE
10.0.0.7/test1/alpine-test   latest              14119a10abf4        2 months ago        5.6MB
root@ubuntu1804:/# 


8.4.3.5 Create a script that tags and uploads images automatically

#Modify the earlier build.sh script to build the image and upload it in one step
root@ubuntu1804:/data/dockerfile/web/nginx1# cat build.sh 
#!/bin/bash
#
TAG=$1
[ -z "$TAG" ] && { echo "Usage: $(basename $0) TAG"; exit 1; }    #abort if no tag was given
docker build -t centos7-nginx:$TAG .
docker tag centos7-nginx:$TAG 10.0.0.7/test1/centos7-nginx:$TAG
docker push 10.0.0.7/test1/centos7-nginx:$TAG

root@ubuntu1804:/data/dockerfile/web/nginx1# bash build.sh 8.8


Log in to the harbor website to verify the script uploaded the image successfully

8.4.3.6 Modify the harbor configuration

Method 1

#Stop docker-compose
[root@CT7test1 ~]# docker-compose stop

#Confirm all related containers have stopped
[root@CT7test1 ~]# docker ps -a

#Modify the harbor configuration
[root@CT7test1 ~]# vim harbor.yml

#Regenerate the configuration
[root@CT7test1 ~]# /apps/harbor/prepare

#Restart docker-compose (via the harbor service unit created earlier)
[root@CT7test1 ~]# systemctl restart harbor

#The related containers start automatically
[root@CT7test1 ~]# docker ps

Method 2

[root@CT7test1 ~]# /apps/harbor/install.sh

8.4.4 Implementing harbor high availability

Harbor supports policy-based Docker image replication, similar in spirit to MySQL master-slave replication. It can synchronize images between different data centers and different runtime environments, and its friendly management UI greatly simplifies image management in day-to-day operations. Many internet companies already use harbor to build internal docker registries, and bidirectional replication is also supported

8.4.4.1 Install a second harbor host

Following the procedure in 8.4.2, install and deploy harbor on a second host and log in to the system.

The IP of this host is 10.0.0.110

8.4.4.2 Create a project on the second harbor

Using the first harbor server's project name as a reference, create a project with the same name on the second harbor server

8.4.4.3 Create a replication target under registry management on the second harbor

Using the first host's information, create a replication (synchronization) target that points to the first host

Enter the host and user information of the first harbor server

8.4.4.4 Create a replication rule on the second harbor for one-way replication to the first harbor

8.4.4.5 Repeat the steps above on the first harbor host

8.4.4.6 Confirm that synchronization succeeds

8.4.4.7 Push images and check whether they synchronize in both directions

root@ubuntu1804:~# docker tag ubuntu:1.0 10.0.0.7/test1/ubuntu:1.0
root@ubuntu1804:~# docker push 10.0.0.7/test1/ubuntu:1.0 
root@ubuntu1804:~# docker tag ubuntu:18.04 10.0.0.110/test1/ubuntu:18.04
root@ubuntu1804:~# docker push 10.0.0.110/test1/ubuntu:18.04 


Upload synchronization succeeded!

8.4.4.8 Delete an image and check whether the deletion synchronizes automatically

Deletion synchronization succeeded!

8.4.5 Securing harbor with https

8.4.5.1 Enable https authentication for Harbor

Parts identical to before
    install docker
    install docker compose
    download the harbor offline installation package and unpack it

Parts that differ
#Generate the private key and certificates
[root@ubuntu1804 ~]#touch /root/.rnd
[root@ubuntu1804 ~]#mkdir /apps/harbor/certs/
[root@ubuntu1804 ~]#cd /apps/harbor/certs/

#Generate the CA certificate
[root@ubuntu1804 certs]#openssl req -newkey rsa:4096 -nodes -sha256 -keyout ca.key -x509 -subj "/CN=ca.test.org" -days 365 -out ca.crt

#Generate the certificate signing request for the harbor host
[root@ubuntu1804 certs]#openssl req -newkey rsa:4096 -nodes -sha256 -subj "/CN=harbor.test.org" -keyout harbor.test.org.key -out harbor.test.org.csr

#Issue the certificate to the harbor host
[root@ubuntu1804 certs]#openssl x509 -req -in harbor.test.org.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out harbor.test.org.crt

[root@ubuntu1804 ~]#tree /apps/harbor/certs
/apps/harbor/certs
├── ca.crt
├── ca.key
├── ca.srl
├── harbor.test.org.crt
├── harbor.test.org.csr
└── harbor.test.org.key
0 directories, 6 files
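Before wiring the certificate into harbor, a quick sanity check that it chains to the CA:

[root@ubuntu1804 certs]#openssl verify -CAfile ca.crt harbor.test.org.crt
harbor.test.org.crt: OK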
[root@ubuntu1804 ~]# vim /apps/harbor/harbor.yml 
hostname: 10.0.0.10

https:
  certificate: /apps/harbor/certs/harbor.test.org.crt
  private_key: /apps/harbor/certs/harbor.test.org.key

harbor_admin_password: 12345sx

[root@ubuntu1804 ~]#apt -y install python
[root@ubuntu1804 ~]#/apps/harbor/install.sh

#Modify the hosts file
10.0.0.10 harbor.test.org

8.4.5.2 Access the harbor website over https

Open a browser and visit https://10.0.0.10/ or https://harbor.test.org ; the following page appears

View the certificate

8.4.5.3 Create a project on the harbor website

8.4.5.4 Download the CA certificate on the client

Logging in and uploading or downloading images directly will fail with an error

root@ubuntu1804:~# vim /apps/harbor/harbor.yml 
root@ubuntu1804:~# docker login 10.0.0.10
Username: admin
Password: 
Error response from daemon: Get https://10.0.0.10/v2/: x509: certificate has expired or is not yet valid


Download the CA certificate on the client

root@ubuntu1804:~# mkdir -pv /etc/docker/certs.d/harbor.test.org/
mkdir: created directory '/etc/docker/certs.d'
mkdir: created directory '/etc/docker/certs.d/harbor.test.org/'

root@ubuntu1804:~# scp -r 10.0.0.10:/apps/harbor/certs/ca.crt /etc/docker/certs.d/harbor.test.org/
The authenticity of host '10.0.0.10 (10.0.0.10)' can't be established.
ECDSA key fingerprint is SHA256:us+JBBSkLg1jknvOIWR3YaDGJ/X6RNKDrVrVksCmko4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.10' (ECDSA) to the list of known hosts.
root@10.0.0.10's password: 
ca.crt                                                        100% 1793     2.1MB/s   00:00    
root@ubuntu1804:~# tree /etc/docker/certs.d/
/etc/docker/certs.d/
└── harbor.test.org
    └── ca.crt

1 directory, 1 file
root@ubuntu1804:~# 


Log in again from the client

root@ubuntu1804:/# docker login harbor.test.org
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded


9、Single-host orchestration: Docker Compose (largely superseded by k8s; a basic understanding is enough)

9.1 Introduction to Docker Compose

When starting many containers on one host, doing everything by hand is tedious and error-prone; in this case the single-host orchestration tool docker-compose is recommended

docker-compose is a single-host orchestration service for docker containers: a tool that manages multiple containers. For example, it can resolve dependencies between containers. Starting an nginx front-end service that calls a tomcat back end requires tomcat to start first, and the tomcat container in turn depends on a database, so the database must start before that. docker-compose can handle such nested dependencies, and it can also replace the docker command for manual operations such as creating, starting and stopping containers

By analogy: if the docker command is like a linux command, then docker compose is like a shell script that runs batch operations on containers automatically, enabling automated container management; or, if the docker command corresponds to the ansible command, then a docker compose file corresponds to an ansible-playbook yaml file
The docker-compose project is an official Docker open source project for quickly orchestrating groups of Docker containers; docker-compose organizes what it manages into three layers: project, service and container
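To make the dependency idea concrete, a minimal sketch in the newer version-3 compose syntax (the image names are placeholders; note that depends_on only orders startup, it does not wait for a service to be ready):

#docker-compose.yml (sketch; image names are examples only)
version: '3'
services:
  db:
    image: mysql:5.7            #started first
    environment:
      MYSQL_ROOT_PASSWORD: "123456"
  tomcat:
    image: tomcat:8.5           #started after db
    depends_on:
      - db
  nginx:
    image: nginx:1.20           #started last
    ports:
      - "80:80"
    depends_on:
      - tomcat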

github: https://github.com/docker/compose

Official docs: https://docs.docker.com/compose/

9.2 Installation and preparation

9.2.1 Install Docker Compose

9.2.1.1 Method 1: install via pip

The python-pip package provides the pip command, a python package installation tool similar to ubuntu's apt or redhat's yum, except that pip only installs python-related packages; pip can be installed and used on many operating systems

The version installed this way is relatively new (docker_compose-1.25.3 at the time of writing) and this method is recommended

Ubuntu: 
# apt update
# apt install -y python-pip
# pip install docker-compose

Centos: 
# yum install epel-release
# yum install -y python-pip
# pip install --upgrade pip
# pip install docker-compose

9.2.1.2 Method 2: download the desired version directly from github

See: https://github.com/docker/compose/releases

This method makes it easy to pin a specific version and is recommended, but the download can be slow

[root@ubuntu1804 ~]#curl -L https://github.com/docker/compose/releases/download/1.25.3/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
[root@ubuntu1804 ~]#chmod +x /usr/local/bin/docker-compose

9.2.1.3 Method 3: install from the distribution's package repository

The version installed this way is relatively old and not recommended

#Install on ubuntu
[root@ubuntu1804 ~]#apt -y install docker-compose
[root@ubuntu1804 ~]#docker-compose --version
docker-compose version 1.17.1, build unknown

#Install on CentOS7, requires the EPEL repository
[root@centos7 ~]#yum -y install docker-compose
[root@centos7 ~]#docker-compose --version
docker-compose version 1.18.0, build 8dd22a9

9.2.2 Command format

Official docs: https://docs.docker.com/compose/reference/

[root@Centos7 ~]# docker-compose help
Define and run multi-container applications with Docker.

Usage:
  docker-compose [-f <arg>...] [--profile <name>...] [options] [--] [COMMAND] [ARGS...]
  docker-compose -h|--help

#Option descriptions:
-f, --file FILE                 #specify the Compose template file, defaults to docker-compose.yml
-p, --project-name NAME         #specify the project name; defaults to the name of the current directory
--verbose                       #show more output
--log-level LEVEL               #set the log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
--no-ansi                       #do not print ANSI control characters
-v, --version                   #show version


#The following are subcommands, to be executed in the directory containing the docker-compose.yml|yaml file
build                           #build images
bundle                          #generate a json-format Docker Bundle backup file, named after the current directory, from the current docker compose file
config  -q                      #validate the current configuration; prints nothing when there are no errors
create                          #create services, rarely used
down                            #stop and remove containers and networks (images and volumes are only removed with --rmi / -v)
events                          #receive real-time events from containers, json log format can be specified, rarely used
exec                            #enter a specified container to run commands
help                            #show help information
images                          #show image information, rarely used
kill                            #force-terminate running containers
logs                            #view container logs
pause                           #pause services
port                            #show ports
ps                              #list containers, rarely used
pull                            #re-pull images; needed after an image has changed, rarely used
push                            #upload images
restart                         #restart services, rarely used
rm                              #remove stopped service containers
run                             #run a one-off container
scale                           #set the number of containers to run for a given service
start                           #start services, rarely used
stop                            #stop services, rarely used
top                             #show the running state of containers
unpause                         #unpause services
up                              #create and start containers

9.2.3 docker compose file format

Official docs: https://docs.docker.com/compose/compose-file/

A docker compose file is a yaml-format file, so leading indentation is strict

By default the docker-compose command uses the docker-compose.yml file in the current directory, so generally cd into the directory containing docker-compose.yml before running docker-compose commands

docker compose files come in many versions, and syntax and format differ between versions

9.3 Starting a single container from docker compose

Note: docker must be installed before using Docker compose

9.3.1 Create the docker compose file

The docker compose file can live in any directory; create a configuration file named docker-compose.yml and pay attention to the indentation

[root@Centos7 ~]# docker-compose --version
docker-compose version 1.29.2, build 5becea4c
[root@Centos7 ~]# mkdir /data/docker-compose
[root@Centos7 ~]# cd /data/docker-compose/
[root@Centos7 docker-compose]# vim docker-compose.yml
[root@Centos7 docker-compose]# cat docker-compose.yml 
service-alpine:
  image: alpine:3.13.6
  container_name: alpine1


9.3.2 View the configuration and check the format

[root@Centos7 docker-compose]# docker-compose config
services:
  service-alpine:
    container_name: alpine1
    image: alpine:3.13.6
    network_mode: bridge
version: '1'

[root@Centos7 docker-compose]# docker-compose config -q

#Deliberately break the docker-compose file format
[root@ubuntu1804 docker-compose]#vim docker-compose.yml
service-alpine          #this line was changed: the trailing ":" was removed
  image: alpine:3.13.6
  container_name: alpine1


[root@Centos7 docker-compose]# docker-compose config -q
ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
  in "./docker-compose.yml", line 2, column 8


9.3.3 Start the container

Note: this must be executed in the directory containing the docker compose file

#Start in the foreground
[root@Centos7 docker-compose]# docker-compose up
Pulling service-alpine (alpine:3.13.6)...
3.13.6: Pulling from library/alpine
4e9f2cdf4387: Pull complete
Digest: sha256:2582893dec6f12fd499d3a709477f2c0c0c1dfcd28024c93f1f0626b9e3540c8
Status: Downloaded newer image for alpine:3.13.6
Creating alpine1 ... done
Attaching to alpine1
alpine1 exited with code 0

#The command above runs attached in the foreground until the container exits

9.3.4 Verify the result of docker compose

#The command above runs in the foreground, so open another terminal window to check the result
[root@Centos7 docker-compose]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
98e593281314        alpine:3.13.6       "/bin/sh"           11 minutes ago      Exited (0) 11 minutes ago                       alpine1



[root@Centos7 docker-compose]# docker-compose ps
 Name     Command   State    Ports
----------------------------------
alpine1   /bin/sh   Exit 0        



9.3.5 Ending foreground execution

[root@ubuntu1804 docker-compose]#docker-compose up
Pulling service-nginx-web (10.0.0.102/example/nginx-centos7-base:1.6.1)...
1.6.1: Pulling from example/nginx-centos7-base
f34b00c7da20: Pull complete
544476d462f7: Pull complete
39345915aa1b: Pull complete
d5376f2bbd9e: Pull complete
4596aecee927: Pull complete
1617b995c379: Pull complete
d00df95be654: Pull complete
Digest: sha256:82e9e7d8bf65e160ba79a92bb25ae42cbbf791092d1e09fb7de25f91b31a21ff
Status: Downloaded newer image for 10.0.0.102/example/nginx-centos7-base:1.6.1
Creating nginx-web ... done
Attaching to nginx-web
^CGracefully stopping... (press Ctrl+C again to force)  #press ctrl+c to stop the container
Stopping nginx-web ... done
[root@ubuntu1804 docker-compose]#docker-compose ps
Name       Command      State  Ports
---------------------------------------------------
nginx-web  /apps/nginx/sbin/nginx  Exit 0    


#Kill the container
[root@ubuntu1804 docker-compose]#docker-compose kill
Killing nginx-web ... done
[root@ubuntu1804 docker-compose]#docker-compose ps
Name       Command      State   Ports
-----------------------------------------------------
nginx-web  /apps/nginx/sbin/nginx  Exit 137

9.3.6 Remove containers

#rm only removes stopped containers
[root@ubuntu1804 docker-compose]#docker-compose rm
Going to remove nginx-web
Are you sure? [yN] y
Removing nginx-web ... done
[root@ubuntu1804 docker-compose]#docker-compose up -d
Creating nginx-web ... done
[root@ubuntu1804 docker-compose]#docker-compose rm
No stopped containers

#down stops and removes the containers and networks
[root@ubuntu1804 docker-compose]#docker-compose down
Stopping nginx-web ... done
Removing nginx-web ... done
[root@ubuntu1804 docker-compose]#docker-compose ps
Name  Command  State  Ports
------------------------------
[root@ubuntu1804 docker-compose]#docker ps -a
CONTAINER ID    IMAGE        COMMAND       CREATED       STATUS       PORTS        NAMES

#docker-compose images now lists nothing because the project's containers are gone (the images themselves remain unless down is run with --rmi)
[root@ubuntu1804 docker-compose]#docker-compose images
Container  Repository  Tag  Image Id  Size
----------------------------------------------

9.3.7 Running in the background

[root@ubuntu1804 docker-compose]#docker-compose up -d
Creating nginx-web ... done
[root@ubuntu1804 docker-compose]#docker-compose ps
Name       Command      State          Ports      
  
-------------------------------------------------------------------------------------
nginx-web  /apps/nginx/sbin/nginx  Up    0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
[root@ubuntu1804 docker-compose]#curl 127.0.0.1/app/
Test Page in app
[root@ubuntu1804 docker-compose]#curl http://127.0.0.1/app/
Test Page in app

9.3.8 Stopping, starting and viewing events

[root@ubuntu1804 docker-compose]#docker-compose stop
Stopping nginx-web ... done
[root@ubuntu1804 docker-compose]#docker-compose ps
Name       Command      State  Ports
---------------------------------------------------
nginx-web  /apps/nginx/sbin/nginx  Exit 0
[root@ubuntu1804 docker-compose]#docker-compose start
Starting service-nginx-web ... done
[root@ubuntu1804 docker-compose]#docker-compose ps
Name       Command      State          Ports      
  
-------------------------------------------------------------------------------------
nginx-web  /apps/nginx/sbin/nginx  Up    0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
[root@ubuntu1804 docker-compose]#docker-compose restart
Restarting nginx-web ... done
[root@ubuntu1804 docker-compose]#docker-compose ps
Name       Command      State          Ports      
  
-------------------------------------------------------------------------------------
nginx-web  /apps/nginx/sbin/nginx  Up    0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp

#While running the operations above, open another terminal to watch the events
[root@ubuntu1804 docker-compose]#docker-compose events
2020-02-04 15:38:13.253822 container kill
5d92e4da8679a973145e5b4db364ae8cf8596a03c4fd0b3b6a28213a2f155be6
(image=10.0.0.102/example/nginx-centos7-base:1.6.1, name=nginx-web)
2020-02-04 15:38:13.531208 container die
5d92e4da8679a973145e5b4db364ae8cf8596a03c4fd0b3b6a28213a2f155be6
(image=10.0.0.102/example/nginx-centos7-base:1.6.1, name=nginx-web)
2020-02-04 15:38:13.631137 container stop
5d92e4da8679a973145e5b4db364ae8cf8596a03c4fd0b3b6a28213a2f155be6
(image=10.0.0.102/example/nginx-centos7-base:1.6.1, name=nginx-web)
2020-02-04 15:38:15.137495 container start
5d92e4da8679a973145e5b4db364ae8cf8596a03c4fd0b3b6a28213a2f155be6
(image=10.0.0.102/example/nginx-centos7-base:1.6.1, name=nginx-web)
2020-02-04 15:38:15.137546 container restart
5d92e4da8679a973145e5b4db364ae8cf8596a03c4fd0b3b6a28213a2f155be6
(image=10.0.0.102/example/nginx-centos7-base:1.6.1, name=nginx-web)

#Show the events in json format
[root@ubuntu1804 docker-compose]#docker-compose events --json
{"time": "2020-02-04T15:48:22.423539", "type": "container", "action": "kill",
"id": "19d72e9bc85842d8879d7dcf2a3d2defd79a5a0c3c3d974ddfbbbc6e95bf910b",
"service": "service-nginx-web", "attributes": {"name": "nginx-web", "image":
"10.0.0.102/example/nginx-centos7-base:1.6.1"}}
{"time": "2020-02-04T15:48:22.537200", "type": "container", "action":
"exec_die", "id":
"19d72e9bc85842d8879d7dcf2a3d2defd79a5a0c3c3d974ddfbbbc6e95bf910b", "service":
"service-nginx-web", "attributes": {"name": "nginx-web", "image":
"10.0.0.102/example/nginx-centos7-base:1.6.1"}}
{"time": "2020-02-04T15:48:22.745670", "type": "container", "action": "die",
"id": "19d72e9bc85842d8879d7dcf2a3d2defd79a5a0c3c3d974ddfbbbc6e95bf910b",
"service": "service-nginx-web", "attributes": {"name": "nginx-web", "image":
"10.0.0.102/example/nginx-centos7-base:1.6.1"}}
{"time": "2020-02-04T15:48:22.863375", "type": "container", "action": "stop",
"id": "19d72e9bc85842d8879d7dcf2a3d2defd79a5a0c3c3d974ddfbbbc6e95bf910b",
"service": "service-nginx-web", "attributes": {"name": "nginx-web", "image":
"10.0.0.102/example/nginx-centos7-base:1.6.1"}}
{"time": "2020-02-04T15:48:23.979421", "type": "container", "action": "start",
"id": "19d72e9bc85842d8879d7dcf2a3d2defd79a5a0c3c3d974ddfbbbc6e95bf910b",
"service": "service-nginx-web", "attributes": {"name": "nginx-web", "image":
"10.0.0.102/example/nginx-centos7-base:1.6.1"}}
{"time": "2020-02-04T15:48:23.979468", "type": "container", "action": "restart",
"id": "19d72e9bc85842d8879d7dcf2a3d2defd79a5a0c3c3d974ddfbbbc6e95bf910b",
"service": "service-nginx-web", "attributes": {"name": "nginx-web", "image":
"10.0.0.102/example/nginx-centos7-base:1.6.1"}}

9.3.9 Pause and unpause

[root@ubuntu1804 docker-compose]#docker-compose pause
Pausing nginx-web ... done
[root@ubuntu1804 docker-compose]#docker-compose ps
Name       Command      State           Ports      
  
--------------------------------------------------------------------------------------
nginx-web  /apps/nginx/sbin/nginx  Paused  0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
[root@ubuntu1804 docker-compose]#curl -m 1 http://127.0.0.1/app/
curl: (28) Operation timed out after 1002 milliseconds with 0 bytes received
[root@ubuntu1804 docker-compose]#docker-compose unpause
Unpausing nginx-web ... done
[root@ubuntu1804 docker-compose]#docker-compose ps
Name       Command      State          Ports      
  
-------------------------------------------------------------------------------------
nginx-web  /apps/nginx/sbin/nginx  Up    0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
[root@ubuntu1804 docker-compose]#curl -m 1 http://127.0.0.1/app/
Test Page in app

9.3.10 Specify the number of containers to start simultaneously

[root@ubuntu1804 docker-compose]#vim docker-compose.yml
[root@ubuntu1804 docker-compose]#cat docker-compose.yml
service-nginx-web:
  image: 10.0.0.102/example/nginx-centos7-base:1.6.1
# container_name: nginx-web     #when starting multiple containers from the same image, do not set a container name, or the names will conflict
  expose:
    - 80
    - 443
# ports:                        #when starting multiple containers from the same image, do not publish fixed host ports, or the ports will conflict
#  - "80:80"
#  - "443:443"

#Add another service
service-tomcat:
  image: 10.0.0.102/example/tomcat-base:v8.5.50

[root@ubuntu1804 docker-compose]#docker-compose ps
Name  Command  State  Ports
------------------------------
[root@ubuntu1804 docker-compose]#docker-compose up -d --scale service-nginx-web=2
Creating docker-compose_service-tomcat_1  ... done
Creating docker-compose_service-nginx-web_1 ... done
Creating docker-compose_service-nginx-web_2 ... done
[root@ubuntu1804 docker-compose]#docker-compose ps
       Name             Command      State    Ports  
--------------------------------------------------------------------------------------
docker-compose_service-nginx-web_1  /apps/nginx/sbin/nginx  Up    443/tcp,80/tcp
docker-compose_service-nginx-web_2  /apps/nginx/sbin/nginx  Up    443/tcp,80/tcp
docker-compose_service-tomcat_1   /bin/bash        Exit 0      
  
[root@ubuntu1804 docker-compose]#docker-compose up -d --scale service-nginx-web=3 --scale service-tomcat=2
Starting docker-compose_service-tomcat_1  ... done
Starting docker-compose_service-nginx-web_1 ... done
Starting docker-compose_service-nginx-web_2 ... done
Creating docker-compose_service-nginx-web_3 ... done
Creating docker-compose_service-tomcat_2  ... done

[root@ubuntu1804 docker-compose]#docker-compose ps
       Name             Command      State    Ports  
--------------------------------------------------------------------------------------
docker-compose_service-nginx-web_1  /apps/nginx/sbin/nginx  Up    443/tcp,80/tcp
docker-compose_service-nginx-web_2  /apps/nginx/sbin/nginx  Up    443/tcp,80/tcp
docker-compose_service-nginx-web_3  /apps/nginx/sbin/nginx  Up    443/tcp,80/tcp
docker-compose_service-tomcat_1   /bin/bash        Exit 0      
  
docker-compose_service-tomcat_2   /bin/bash        Exit 0      
  
[root@ubuntu1804 docker-compose]#docker-compose up -d
Stopping and removing docker-compose_service-nginx-web_2 ... done
Stopping and removing docker-compose_service-nginx-web_3 ... done
Stopping and removing docker-compose_service-tomcat_2  ... done
Starting docker-compose_service-tomcat_1         ... done
Starting docker-compose_service-nginx-web_1       ... done

[root@ubuntu1804 docker-compose]#docker-compose ps
       Name             Command      State    Ports  
--------------------------------------------------------------------------------------
docker-compose_service-nginx-web_1  /apps/nginx/sbin/nginx  Up    443/tcp,80/tcp
docker-compose_service-tomcat_1   /bin/bash        Exit 0

9.4 Starting multiple containers from docker compose

9.4.1 Edit the docker-compose file and use a data volume

Note: for the same file, the data volume has higher priority than the file inside the image

[root@ubuntu1804 docker-compose]#vim docker-compose.yml
[root@ubuntu1804 docker-compose]#cat docker-compose.yml
service-nginx-web:
  image: 10.0.0.102/example/nginx-centos7-base:1.6.1
  container_name: nginx-web
  volumes:
    - /data/nginx:/apps/nginx/html/    #data volume: mount host /data/nginx to /apps/nginx/html in the container
  expose:
    - 80
    - 443
  ports:
    - "80:80"
    - "443:443"
service-tomcat-app1:
  image: 10.0.0.102/example/tomcat-web:app1
  container_name: tomcat-app1
  expose:
    - 8080
  ports:
    - "8081:8080"
service-tomcat-app2:
  image: 10.0.0.102/example/tomcat-web:app2
  container_name: tomcat-app2
  expose:
    - 8080
  ports:
    - "8082:8080"
 
#Prepare the nginx test page on the host
[root@ubuntu1804 docker-compose]#mkdir /data/nginx
[root@ubuntu1804 docker-compose]#echo Docker compose test page >/data/nginx/index.html

9.4.2 Start the containers and verify the result

[root@ubuntu1804 docker-compose]#docker-compose up -d
Pulling service-tomcat-app1 (10.0.0.102/example/tomcat-web:app1)...
app1: Pulling from example/tomcat-web
f34b00c7da20: Already exists
544476d462f7: Already exists
39345915aa1b: Already exists
4b792f2bae38: Already exists
4439447a3522: Already exists
fe34d2ec1dd0: Already exists
b8487ca03126: Already exists
5a475b7d8b1a: Already exists
df8703d3d2dd: Already exists
f0da1ffa7aa7: Pull complete
80fd4c70e670: Pull complete
c2a0247d7bfa: Pull complete
b0977ed809cd: Pull complete
Digest: sha256:e0aba904df6095ea04c594d6906101f8e5f4a6ceb0a8f9b24432c47698d0caa8
Status: Downloaded newer image for 10.0.0.102/example/tomcat-web:app1
Pulling service-tomcat-app2 (10.0.0.102/example/tomcat-web:app2)...
app2: Pulling from example/tomcat-web
f34b00c7da20: Already exists
544476d462f7: Already exists
39345915aa1b: Already exists
4b792f2bae38: Already exists
4439447a3522: Already exists
fe34d2ec1dd0: Already exists
b8487ca03126: Already exists
5a475b7d8b1a: Already exists
df8703d3d2dd: Already exists
f0da1ffa7aa7: Already exists
80fd4c70e670: Already exists
1a55cb76a801: Pull complete
565ab795f82a: Pull complete
Digest: sha256:c4d6f166c3933f6c1ba59c84ea0518ed653af25f28b87981c242b0deff4209bb
Status: Downloaded newer image for 10.0.0.102/example/tomcat-web:app2
Creating tomcat-app1 ... done
Creating tomcat-app2 ... done
Creating nginx-web  ... done
[root@ubuntu1804 docker-compose]#docker-compose ps
 Name         Command        State          Ports 
       
-----------------------------------------------------------------------------------------------
nginx-web   /apps/nginx/sbin/nginx      Up    0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
tomcat-app1  /apps/tomcat/bin/run_tomcat.sh  Up    8009/tcp, 0.0.0.0:8081->8080/tcp    
tomcat-app2  /apps/tomcat/bin/run_tomcat.sh  Up    8009/tcp, 0.0.0.0:8082->8080/tcp    
[root@ubuntu1804 docker-compose]#curl http://127.0.0.1/
Docker compose test page
[root@ubuntu1804 docker-compose]#curl http://127.0.0.1:8081/app/
Tomcat Page in app1
[root@ubuntu1804 docker-compose]#curl http://127.0.0.1:8082/app/
Tomcat Page in app2

10、Docker resource limits (largely superseded by k8s; a basic understanding is enough)

10.1 docker resource limits

10.1.1 Introduction to container resource limits

Official docs: https://docs.docker.com/config/containers/resource_constraints/

By default a container has no resource limits and may use as much of any resource as the host's kernel scheduler allows

Docker provides ways to control how much memory, CPU and so on a container may use; these limits are set through runtime configuration flags of the docker run command.

Many of these features require the host kernel to support them. To check for support, run the docker info command; if a required kernel feature is unavailable, a warning appears at the end of the output, such as:

WARNING: No swap limit support

The warning can be eliminated by changing kernel parameters

Official docs: https://docs.docker.com/install/linux/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
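For reference, the fix documented at the link above amounts to enabling memory and swap accounting in GRUB and rebooting; a sketch of the relevant lines on Ubuntu/Debian:

#/etc/default/grub
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"

#apply the change and reboot
[root@ubuntu1804 ~]#update-grub
[root@ubuntu1804 ~]#reboot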

10.1.2 OOM (Out of Memory Exception)

On a Linux host, if there is not enough memory left to carry out important system tasks, an OOM (Out of Memory) exception is raised and the system starts killing processes to free memory. Any process running on the host may be killed, including dockerd and other applications; if an important system process is killed, every service depending on it goes down with it. Applications with large memory footprints, such as MySQL databases and Java programs, are the most likely to be killed

When an OOM occurs, dockerd tries to mitigate the risk by adjusting the OOM priority of the Docker daemon so that it is less likely than other processes on the system to be killed. The OOM priority of containers is not adjusted, which makes an individual container more likely to be killed than the Docker daemon or other system processes. Bypassing these safeguards by manually setting --oom-score-adj to an extreme negative value on the daemon or a container, or by setting --oom-kill-disable on a container, is not recommended

OOM priority mechanism:
linux computes a score for every process and ultimately kills the process with the highest score

/proc/PID/oom_score_adj
#range -1000 to 1000; the higher the value, the more likely the host kills the process; setting it to -1000 means the process is never killed by the host kernel

/proc/PID/oom_adj
#range -17 to +15; the higher the value, the more likely the process is killed; -17 means it cannot be killed; this parameter exists for compatibility with older Linux kernels

/proc/PID/oom_score
#the process score computed by the system from its memory consumption, CPU time (utime + stime), run time (uptime - start time) and oom_adj; the more memory a process consumes, the higher the score and the more likely the host kernel kills it

#Sort by memory usage
[root@ubuntu1804 ~]#top
top - 20:15:38 up  5:53,  3 users, load average: 0.00, 0.00, 0.00
Tasks: 191 total,  1 running, 116 sleeping,  0 stopped,  0 zombie
%Cpu(s):  0.0 us,  0.3 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  985104 total,  310592 free,  448296 used,  226216 buff/cache
KiB Swap:  1951740 total,  1892860 free,   58880 used.  384680 avail Mem
  PID USER   PR  NI    VIRT    RES   SHR S %CPU %MEM   TIME+ COMMAND
19674 2019   20   0 2241656  94684 12452 S  0.0  9.6  0:16.05 java
19675 2019   20   0 2235512  74816 12440 S  0.0  7.6  0:14.89 java
19860 99     20   0  183212  67748   960 S  0.0  6.9  0:01.15 haproxy
 4969 root   20   0  937880  49352 12612 S  0.0  5.0  0:46.07 dockerd
 2981 root   20   0  793072  13552  1808 S  0.0  1.4  0:13.78 containerd
  500 root   19  -1   78560   7552  7112 S  0.0  0.8  0:01.45 systemd-journal
  798 root   20   0  170416   6604  4084 S  0.0  0.7  0:00.77 networkd-dispat
    1 root   20   0   78036   6200  4416 S  0.0  0.6  0:05.39 systemd
 1011 root   20   0   24548   5496  3012 S  0.0  0.6  0:01.62 bash
  852 root   10 -10   25880   5264  4036 S  0.0  0.5  0:00.00 iscsid
  815 root   20   0  548292   4624  1224 S  0.0  0.5  0:01.89 snapd
19586 root   20   0  109104   4532  3768 S  0.0  0.5  0:00.29 containerd-shim
19779 root   20   0  405532   4224  2828 S  0.0  0.4  0:00.01 docker-proxy
19784 root   20   0  107696   4204  3652 S  0.0  0.4  0:00.29 containerd-shim
19424 root   20   0  109104   4084  3416 S  0.0  0.4  0:00.27 containerd-shim
20064 root   20   0   44076   4036  3360 R  0.7  0.4  0:00.20 top
19768 root   20   0  405532   4024  2644 S  0.0  0.4  0:00.01 docker-proxy
19423 root   20   0  109104   3792  3064 S  0.0  0.4  0:00.31 containerd-shim
  490 root   20   0  193112   3316  2864 S  0.0  0.3  0:26.20 vmtoolsd
 7108 root   20   0  105688   3204  2504 S  0.0  0.3  0:00.11 sshd

[root@ubuntu1804 ~]#cat /proc/19674/oom_adj
0
[root@ubuntu1804 ~]#cat /proc/19674/oom_score
32
[root@ubuntu1804 ~]#cat /proc/19674/oom_score_adj
0
[root@ubuntu1804 ~]#cat /proc/7108/oom_adj
0
[root@ubuntu1804 ~]#cat /proc/7108/oom_score
1
[root@ubuntu1804 ~]#cat /proc/7108/oom_score_adj
0

#Default OOM values of the docker daemon process
[root@ubuntu1804 ~]#cat /proc/`pidof dockerd`/oom_adj
-8
[root@ubuntu1804 ~]#cat /proc/`pidof dockerd`/oom_score
0
[root@ubuntu1804 ~]#cat /proc/`pidof dockerd`/oom_score_adj
-500
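A container's OOM priority can also be influenced at run time through docker run flags (generally not recommended, as noted above); a quick sketch, where the container names and the nginx image are examples only:

#lower the kill priority of a container; -500 mirrors dockerd's own default
[root@ubuntu1804 ~]#docker run -d --name critical-app --oom-score-adj=-500 nginx

#or exempt a memory-limited container from the OOM killer entirely (use with care)
[root@ubuntu1804 ~]#docker run -d --name pinned-app -m 512m --oom-kill-disable nginx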

10.2 Container memory limits

Docker can enforce hard memory limits, allowing the container to use no more than a given amount of memory.
Docker can also enforce soft memory limits, letting the container use as much memory as it likes unless the kernel detects memory pressure on the host

10.2.1 Memory-related options

Official docs: https://docs.docker.com/config/containers/resource_constraints/
Most of the options below take a positive integer followed by a suffix b, k, m or g, meaning bytes, kilobytes, megabytes or gigabytes

Option / Description
-m or --memory=      #the maximum amount of physical memory the container may use; a hard limit; the minimum allowed value is 4m (4 MB); the most commonly used option
--memory-swap*       #the amount of memory this container may swap to disk; only usable after -m has set a memory limit; details below
--memory-swappiness  #the container's tendency to use the swap partition; higher values mean more willing to swap; range 0-100, where 0 means avoid swap whenever possible and 100 means use it whenever possible
--memory-reservation #a soft limit smaller than --memory, activated when Docker detects contention or low memory on the host; to take effect first it must be set lower than --memory; being a soft limit, it does not guarantee the container stays below it
--kernel-memory      #the maximum kernel memory the container may use, minimum 4m; kernel memory is isolated from user-space memory and cannot be swapped against it, so a container short on kernel memory may block host resources and affect the host and other containers or services; setting it is therefore not recommended
--oom-kill-disable   #by default the kernel kills processes in the container on an out-of-memory (OOM) error; use this option to change that behavior; only disable OOM killing on containers where -m/--memory is also set, otherwise the host may run out of memory and the kernel may have to kill host system processes to free memory

10.2.2 Swap limits

--memory-swap #only meaningful after --memory is set. With swap, a container can move memory beyond its limit to disk. WARNING: applications that frequently swap memory to disk suffer reduced performance

Different --memory-swap settings have different effects:

--memory-swap   --memory     Effect
positive S      positive M   total usable memory is S, of which ram is M and swap is S-M; if S=M, no swap is available
0               positive M   equivalent to swap being unset
unset           positive M   if the host (Docker Host) has swap enabled, the container's usable swap is 2*M
-1              positive M   if the host (Docker Host) has swap enabled, the container may use up to all of the host's swap space

--memory-swap #with a positive value, both --memory and --memory-swap must be set; --memory-swap is the total of usable memory plus swap, e.g. with --memory=300m and --memory-swap=1g the container can use 300m of physical memory and 700m of swap, i.e. --memory stays the physical memory size and the actual swap size is (--memory-swap)-(--memory)=the container's usable swap

--memory-swap #if set to 0, the setting is ignored and treated as unset, i.e. no swap is configured

--memory-swap #if equal to the --memory value, with --memory set to a positive integer, the container has no access to swap

--memory-swap #if unset, and the host has swap enabled, the container's effective swap is at most 2x(--memory), i.e. twice the physical memory size; for example with --memory="300m" and --memory-swap unset, the container can use 300m of memory plus 600m of swap, though not precisely (the swap size shown by free inside a container is not exact: every container sees the full size, while the host's swap has an upper bound and is not the sum of what all containers see)

--memory-swap #if set to -1, and the host has swap enabled, the container may use up to the host's maximum swap space

Note: free executed inside a container shows the host's memory and swap usage, not the container's own swap usage
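A quick sketch of the S/M rule from the table above (the values are examples only):

#S=1g, M=300m: 300m of ram plus 700m of swap
[root@ubuntu1804 ~]#docker run -it --rm -m 300m --memory-swap 1g centos:centos7.7.1908 bash

#S=M=300m: 300m of ram and no swap at all
[root@ubuntu1804 ~]#docker run -it --rm -m 300m --memory-swap 300m centos:centos7.7.1908 bash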

Viewing memory inside a container

[root@ubuntu1804 ~]#free
      total    used    free   shared buff/cache  available
Mem:     3049484    278484   1352932    10384   1418068   2598932
Swap:    1951740      0   1951740
[root@ubuntu1804 ~]#docker run -it --rm -m 2G centos:centos7.7.1908 bash
[root@f5d387b5022f /]# free
      total    used    free   shared buff/cache  available
Mem:     3049484    310312   1320884    10544   1418288   2566872
Swap:    1951740      0   1951740

10.2.3 The stress-ng stress-testing tool

stress-ng is a stress-testing tool; it can be installed from the distribution's software repositories, and docker container images of it are also available


See its built-in help after downloading for detailed usage

If a container has no memory limit, it can consume up to the maximum memory available on the system; by default, newly created containers have no memory limit.
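As an illustration, a sketch using the community lorel/docker-stress-ng image (an assumption; any stress-ng build will do) to watch a memory limit take effect:

#two vm workers, each trying to allocate 256m, inside a container capped at 300m
[root@ubuntu1804 ~]#docker run --rm -m 300m lorel/docker-stress-ng --vm 2 --vm-bytes 256m

#observe actual usage from another terminal
[root@ubuntu1804 ~]#docker stats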

10.3 Container CPU limits

10.3.1 Introduction to container CPU limits

Official docs: https://docs.docker.com/config/containers/resource_constraints/
A host may have only a few dozen CPU cores, yet run hundreds or thousands of processes handling different tasks. CPU is a compressible resource shared between processes: one core can run multiple processes through scheduling, but within a single moment only one process runs on a core. So how are all these processes executed and scheduled on the CPU?

Linux kernel process scheduling is based on CFS (Completely Fair Scheduler)

Resource-intensive server workloads

  • CPU-intensive scenarios: priority should be as low as possible. Compute-intensive tasks perform large amounts of computation and consume CPU resources, e.g. computing pi, data processing, HD video decoding; they rely entirely on CPU power.
  • IO-intensive scenarios: priority can be somewhat higher. Tasks involving network or disk IO are IO-intensive; they consume little CPU and spend most of their time waiting for IO to complete (because IO is far slower than CPU and memory), e.g. web applications; for high-concurrency, data-heavy dynamic sites, the database is typically IO-bound

How CFS works
CFS defines a new process-scheduling model: it assigns each process in a cfs_rq (CFS run queue) a virtual clock, vruntime. While a process runs, its vruntime keeps growing over time; a process that is not running keeps its vruntime unchanged. The scheduler always picks the process whose vruntime has advanced the least, which is what "completely fair" means. To distinguish priorities, the vruntime of higher-priority processes grows more slowly, so they may get more chances to run.
The point of CFS is that in a system mixing many compute-bound processes with interactive IO-bound processes, it treats the IO-bound interactive processes more kindly and fairly than other schedulers do.

10.3.2 Configuring the default CFS scheduler

By default, every container has unrestricted access to the host's CPU cycles. Various constraints can be set to limit a given container's access to the host's CPU cycles. Most users use and configure the default CFS scheduler; Docker 1.13 and later can also configure the realtime scheduler.

CFS is the Linux kernel CPU scheduler for normal Linux processes. Several runtime flags configure how much access to CPU resources a container has; when they are used, Docker modifies the settings of the container's cgroup on the host.

Option / Description
--cpus=<value>        #how much of the available CPU resources the container may use. For example, on a host with two CPUs, --cpus="1.5" guarantees the container at most one and a half CPUs. Equivalent to setting --cpu-period="100000" and --cpu-quota="150000".
--cpu-period=<value>  #the CPU CFS scheduler period, used together with --cpu-quota. Defaults to 100000 microseconds (100 milliseconds). Most users do not change this; for most use cases --cpus is the more convenient choice.
--cpu-quota=<value>   #imposes a CPU CFS quota on the container: the number of microseconds per --cpu-period the container may run before being throttled, acting as an effective ceiling. For most use cases --cpus is the more convenient choice.
--cpuset-cpus         #restricts the container to specific CPUs or cores. With multiple CPUs, this takes a comma-separated list or a hyphen-separated range; the first CPU is numbered 0. Valid values look like 0-3 (use the first four CPUs) or 1,3 (use the second and fourth CPUs).
--cpu-shares          #set above or below the default of 1024 to give the container a larger or smaller weight, i.e. access to a different proportion of the host's CPU cycles. It is only enforced when CPU cycles are contended; when enough CPU cycles are available, all containers use as much CPU as they need, so it is a soft limit. --cpu-shares does not prevent containers from being scheduled in swarm mode; it prioritizes container CPU resources for the available CPU cycles but does not guarantee or reserve any specific CPU access.
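A quick illustration of these flags used together (the nginx image and container name are examples only):

#cap at 1.5 CPUs, pin to cores 0 and 1, and halve the default cpu-shares weight
[root@ubuntu1804 ~]#docker run -d --name web1 --cpus=1.5 --cpuset-cpus=0,1 --cpu-shares=512 nginx

#inspect the resulting settings
[root@ubuntu1804 ~]#docker inspect -f '{{.HostConfig.NanoCpus}} {{.HostConfig.CpusetCpus}} {{.HostConfig.CpuShares}}' web1
1500000000 0,1 512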

11、Portainer, a graphical management tool

11.1 Introduction to Portainer

Portainer is a graphical tool for managing containers and images; with Portainer you can easily build, manage and maintain a Docker environment. It is completely free, installs as a container, and deploys quickly and conveniently.
Official site: https://www.portainer.io/

11.2 Install Portainer

Official installation instructions: https://www.portainer.io/installation/

[root@ubuntu1804 ~]#docker search portainer |head -n 3
NAME                 DESCRIPTION                                      STARS   OFFICIAL   AUTOMATED
portainer/portainer  Making Docker management easy. https://porta…   1569
portainer/agent      An agent used to manage all the resources in…   54                 0

#The portainer project is deprecated
[root@ubuntu1804 ~]#docker pull portainer/portainer

#The portainer-ce project replaces portainer
[root@ubuntu1804 ~]#docker pull portainer/portainer-ce
[root@ubuntu1804 ~]#docker volume create portainer_data
portainer_data
[root@ubuntu1804 ~]#docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
20db26b67b791648c2ef6aee444a5226a9c897ebcf0160050e722dbf4a4906e3

[root@ubuntu1804 ~]#docker ps
CONTAINER ID    IMAGE                    COMMAND          CREATED         STATUS           PORTS                                            NAMES
20db26b67b79    portainer/portainer-ce   "/portainer"     5 seconds ago   Up 4 seconds     0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp   portainer

11.3 Log in and use Portainer

Open a browser and visit: http://localhost:9000 ; the following page appears
Set the admin user's password; the same password of more than 8 characters must be entered twice

11.4 View host information

11.5 Create portainer users

Regular users have limited permissions and cannot manage containers

Consider checking the administrator option when creating users

11.6 Manage images

11.7 Manage containers

Summary of docker commands

attach                  # attach to a running container from the current shell
build                   # build an image from a dockerfile
commit                  # commit the current container as a new image
cp                      # copy a file or directory from a container to the host
create                  # create a new container; like run, but does not start the container
diff                    # show changes in a docker container's filesystem
events                  # get real-time container events from the docker service
exec                    # run a command in an existing container
export                  # export a container's contents as a tar archive [counterpart of import]
history                 # show the history of an image
images                  # list the images currently on the system
import                  # create a new filesystem image from the contents of a tar archive [counterpart of export]
info                    # show system-wide information
inspect                 # show detailed information on a container
kill                    # kill the specified container
load                    # load an image from a tar archive [counterpart of save]
login                   # register with or log in to a docker registry server
logout                  # log out from the current docker registry
logs                    # print the logs of a container
port                    # show which container port a mapped port corresponds to
pause                   # pause a container
ps                      # list containers
pull                    # pull the specified image or repository from a docker registry server
push                    # push the specified image or repository to a docker registry server
restart                 # restart a running container
rm                      # remove one or more containers
rmi                     # remove one or more images [an image can only be removed when no container uses it; remove the related containers first, or force with -f]
run                     # create a new container and run a command in it
save                    # save an image as a tar archive [counterpart of load]
search                  # search docker hub for images
start                   # start a container
stop                    # stop a container
tag                     # tag an image in a registry
top                     # show the processes running inside a container
unpause                 # unpause a container
version                 # show the docker version
wait                    # block until a container stops, then print its exit status