Docker Relearning Notes

Docker

Origins

  • Docker is an open-source container project, implemented in Go, released by dotCloud. Founded in 2010, dotCloud served developers through a PaaS (Platform as a Service) platform. On a PaaS platform the whole service environment is preconfigured: developers only pick a service type and upload their code to go live, with no need to spend large amounts of time building services and configuring environments. dotCloud's PaaS was already quite good: it supported almost every mainstream web programming language and database, letting developers freely choose the languages, databases, and frameworks they needed; setup was very simple, and after each round of coding a single command deployed the whole site. Thanks to its multi-layer platform concept, its applications could in theory run on any kind of cloud service. After two or three years, although dotCloud had earned a decent reputation in the industry, the PaaS market as a whole was still in its infancy; the company's growth stayed lukewarm, with no explosive breakout.

  • Docker initially ran mainly on Ubuntu, with RHEL/CentOS support added later. All the major cloud companies, such as Azure, Google, and Amazon, support Docker technology, which has effectively made Docker an important building block of cloud computing.

  • Docker blurs the line between IaaS and PaaS and opens up endless possibilities for how cloud services are delivered. With its container philosophy, Docker tore things down to build anew, a remarkable feat of the cloud computing movement.

    https://www.ruanyifeng.com/blog/2017/07/iaas-paas-saas.html

Concept and Advantages

  • Docker is currently defined as an open-source container engine that makes containers easy to manage. By packaging and encapsulating images, and introducing the Docker Registry for unified image management, it builds a fast, convenient "Build, Ship and Run" workflow that unifies the environment and process across development, testing, and deployment, greatly reducing operations cost.
  • Docker containers are fast: they start and stop in seconds, much quicker than traditional virtual machines. The core problem Docker solves is using containers to provide VM-like functionality, so that more computing resources can be delivered to users on less hardware. Beyond the application running inside it, a container consumes almost no extra system resources, preserving application performance while keeping system overhead small; this makes it possible to run thousands of Docker containers on a single host.
  • A consistent runtime environment
  • Resources, networking, libraries, and so on are isolated, so dependency conflicts do not occur
  • All kinds of standardized operations, well suited to automation
  • Lightweight, with fast startup and migration
Workflow

Installation

  • Install Docker on CentOS
[root@localhost ~]# curl -fsSL https://get.docker.com | bash -s docker --mirror aliyun
  • After installation the docker service is not started by default; start it manually
[root@localhost ~]# systemctl start docker
  • Enable the docker service at boot
[root@localhost ~]# systemctl enable docker
  • Check the version
[root@localhost ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        f0df350
 Built:             Wed Jun  2 11:58:10 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:56:35 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Basic Components

  • Docker client: the most commonly used Docker client is the docker command. With docker we can conveniently build and run containers on the Host.

  • Docker server: the Docker daemon runs on the Docker host and is responsible for creating, running, and monitoring containers, as well as building and storing images. In the default configuration, the Docker daemon only responds to client requests from the local host. To allow requests from remote clients, TCP listening must be enabled in the configuration file.

    • Edit the configuration file /etc/systemd/system/multi-user.target.wants/docker.service and append -H tcp://0.0.0.0 to the ExecStart line to allow client connections from any IP
    • Restart the Docker daemon
    systemctl daemon-reload
    systemctl restart docker.service
    
    • With the Docker server at 192.168.9.140, a client on another machine can talk to the remote server by adding the -H flag on the command line
    docker -H 192.168.9.140 info
    
    [root@xdja ~]# docker -H 192.168.9.140 info
    Containers: 3
     Running: 3
     Paused: 0
     Stopped: 0
    Images: 3
    Server Version: 18.09.7
    Storage Driver: devicemapper
     Pool Name: docker-8:3-67364689-pool
     Pool Blocksize: 65.54kB
     Base Device Size: 10.74GB
     Backing Filesystem: xfs
     Udev Sync Supported: true
     Data file: /dev/loop0
     Metadata file: /dev/loop1
     Data loop file: /var/lib/docker/devicemapper/devicemapper/data
     Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
    
  • Image: a Docker image is the basis for creating containers, similar to a virtual machine snapshot; it can be thought of as a read-only template for the Docker container engine. For example, an image can be a complete CentOS operating system environment, called a CentOS image, or an application with MySQL installed, called a MySQL image, and so on.

  • Container: a Docker container is a running instance created from an image; it can be started, stopped, and deleted. Each container is isolated from and invisible to the others, to keep the platform secure. A container can be viewed as a stripped-down Linux environment; Docker uses containers to run and isolate applications.

  • Repository: a Docker repository is where images are stored centrally. After developers create their own images, they can use the push command to upload them to a public or private repository. The next time the image is needed on another machine, it can simply be pulled from the repository.

    The official Docker registry is https://hub.docker.com

Docker Architecture Diagram
  • Docker host: a physical or virtual machine that runs the Docker daemon and containers.

Image build: creating an image that contains the runtime environment, program code, and everything else needed to install and run the application; this build process is done with a Dockerfile.

Container start: a container ultimately runs by pulling a built image and starting the service with a series of runtime options (port mappings, external data mounts, environment variables, and so on). For a single container this is done with docker run.

When multiple containers are involved (service orchestration), docker-compose can be used instead. It makes it easy to run several containers as services (or just one of them) and also provides scale (service scaling) functionality; see the sketch below.
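As an illustration, here is a minimal docker-compose.yml sketch; the service names and images are assumptions for this example, not taken from the notes above:

version: "3"
services:
  web:
    image: httpd:latest
    ports:
      - "80"          # dynamic host port, so the service can be scaled
  redis:
    image: redis:latest

docker-compose up -d starts both services; docker-compose up -d --scale web=3 runs three web containers behind dynamically assigned host ports.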

Building an Image with a Dockerfile

  • Pull the centos image
[root@localhost docker]# docker pull centos
  • Upload the JDK and Tomcat packages
[root@localhost docker]# ll
total 151856
-rw-rw-rw- 1 root root  10559131 Jun 21 17:45 apache-tomcat-8.5.68.tar.gz
-rw-r--r-- 1 root root       696 Jun 22 09:32 Dockerfile
-rw-rw-rw- 1 root root 144935989 Jun 22 09:15 jdk-8u291-linux-x64.tar.gz
  • Create the Dockerfile
[root@localhost wch]# pwd
/home/wch
[root@localhost wch]# mkdir docker
[root@localhost docker]# touch Dockerfile
  • Enter the following content
# Base image
FROM centos:latest
# Maintainer information
MAINTAINER wch
# Add tomcat and jdk to the image
# The jdk and tomcat tarballs are in the current directory; ADD extracts them automatically
ADD jdk-8u291-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-8.5.68.tar.gz /usr/local/
# Set environment variables
ENV JAVA_HOME /usr/local/jdk1.8.0_291/
ENV PATH $JAVA_HOME/bin:$PATH
ENV CLASSPATH .:$JAVA_HOME/lib
# Make the startup scripts executable
RUN chmod +x /usr/local/apache-tomcat-8.5.68/bin/*.sh
# Expose the port used to communicate with the outside world
EXPOSE 8080
# Define the program to run after the container starts
ENTRYPOINT /usr/local/apache-tomcat-8.5.68/bin/startup.sh && /bin/bash && tail -f /usr/local/apache-tomcat-8.5.68/logs/catalina.out
  • Build the image; on success the image ID is returned
[root@localhost docker]# docker build -f /home/wch/docker/Dockerfile -t wch/tomcat .
Sending build context to Docker daemon  155.5MB
Step 1/10 : FROM centos:latest
 ---> 300e315adb2f
Step 2/10 : MAINTAINER wch
 ---> Running in c9ff9c1277b4
Removing intermediate container c9ff9c1277b4
 ---> 3b8b3ffc8af3
Step 3/10 : ADD jdk-8u291-linux-x64.tar.gz /usr/local/
 ---> 988571412bac
Step 4/10 : ADD apache-tomcat-8.5.68.tar.gz /usr/local/
 ---> f160e9207148
Step 5/10 : ENV JAVA_HOME /usr/local/jdk1.8.0_291/
 ---> Running in 4574503f1307
Removing intermediate container 4574503f1307
 ---> af37b9368f59
Step 6/10 : ENV PATH $JAVA_HOME/bin:$PATH
 ---> Running in 30521e475681
Removing intermediate container 30521e475681
 ---> 98760e798091
Step 7/10 : ENV CLASSPATH .:$JAVA_HOME/lib
 ---> Running in 6efa1040eb62
Removing intermediate container 6efa1040eb62
 ---> e50226013e04
Step 8/10 : RUN chmod +x /usr/local/apache-tomcat-8.5.68/bin/*.sh
 ---> Running in 733a8f068adc
Removing intermediate container 733a8f068adc
 ---> 60ffde451605
Step 9/10 : EXPOSE 8080
 ---> Running in 024e2e19af04
Removing intermediate container 024e2e19af04
 ---> 52afaea4fc62
Step 10/10 : ENTRYPOINT /usr/local/apache-tomcat-8.5.68/bin/startup.sh && /bin/bash && tail -f /usr/local/apache-tomcat-8.5.68/logs/catalina.out
 ---> Running in 69e6fea9f1b7
Removing intermediate container 69e6fea9f1b7
 ---> 9b8179770e78
Successfully built 9b8179770e78
Successfully tagged wch/tomcat:latest

The trailing . at the end of the command sets the build context to the current directory. By default Docker looks for the Dockerfile in the build context; the -f flag can also be used to specify the Dockerfile's location.

docker build -f /home/wch/docker/Dockerfile -t wch/tomcat .

Or, equivalently:

cd /home/wch/docker

docker build -t wch/tomcat .

  • View the built image
[root@localhost docker]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
wch/tomcat                         latest              9b8179770e78        25 minutes ago      584MB
grafana/grafana                    latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                    latest              86ea6f86fc57        4 weeks ago         185MB
quay.io/prometheus/node-exporter   latest              c19ae228f069        3 months ago        26MB
centos                             latest              300e315adb2f        6 months ago        209MB
  • Start a container from the built image
docker run -d -p 8010:8080 wch/tomcat
b43861a53e3206650d57107c869f538cc3384630957fcb8bff1cc40bb92610e0
  • Access http://[Host IP]:8010 in a browser to see the Tomcat welcome page
  • Look inside the container
[root@localhost ~]# docker exec -it b43861a53e32 /bin/bash
[root@b43861a53e32 /]# cd /usr/local/
[root@b43861a53e32 local]# ls
apache-tomcat-8.5.68  bin  etc  games  include  jdk1.8.0_291  lib  lib64  libexec  sbin  share  src

RUN vs CMD vs ENTRYPOINT

  • RUN: executes a command and creates a new image layer; RUN is often used to install software packages.

  • CMD: sets the default command and arguments executed when the container starts, but CMD can be overridden by command-line arguments appended to docker run.

    • If docker run specifies another command, the default command given by CMD is ignored
    • If a Dockerfile contains multiple CMD instructions, only the last one takes effect
  • ENTRYPOINT: configures the command to run when the container starts.

    • ENTRYPOINT is never ignored; it always runs, even if docker run specifies another command. CMD can supply extra default arguments for ENTRYPOINT, and those defaults can be replaced from the docker run command line.
    • The shell format of ENTRYPOINT ignores any arguments provided by CMD or docker run
  • Shell format: when the instruction executes, /bin/sh -c [command] is invoked under the hood

    • RUN apt-get install python3
    • CMD echo "hello world"
    • ENTRYPOINT echo "hello world"
  • Exec format: when the instruction executes, [command] is invoked directly without being parsed by a shell.

    • RUN ["apt-get","install","python3"]
    • CMD ["/bin/echo","hello world"]
    • ENTRYPOINT ["/bin/echo","hello world"]
    • ENTRYPOINT ["/bin/echo","hello"] CMD [" world"]
    • ENV name Cloud Man ENTRYPOINT ["/bin/sh","-c","echo hello,$name"]

CMD and ENTRYPOINT are recommended in Exec format because the instructions are more readable and easier to understand; see the sketch below. RUN works fine in either format.
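A minimal sketch of how ENTRYPOINT and CMD combine in Exec format (the image tag pinger is made up for this example):

FROM busybox:latest
ENTRYPOINT ["/bin/ping", "-c", "3"]
CMD ["localhost"]

docker build -t pinger .
docker run pinger                 # pings localhost, the CMD default
docker run pinger 192.168.9.140   # the extra argument replaces the CMD default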

Distributing Images

Using a Public Registry

  • Docker Hub: first register an account through the web page
[root@localhost ~]# docker login -u wholegale39
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
  • View local images
[root@localhost ~]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
wch/tomcat                         latest              9b8179770e78        5 hours ago         584MB
grafana/grafana                    latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                    latest              86ea6f86fc57        4 weeks ago         185MB
quay.io/prometheus/node-exporter   latest              c19ae228f069        3 months ago        26MB
centos                             latest              300e315adb2f        6 months ago        209MB
  • Re-tag the image with a new name
[root@localhost ~]# docker tag wch/tomcat wholegale39/tomcat
[root@localhost ~]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
wch/tomcat                         latest              9b8179770e78        5 hours ago         584MB
wholegale39/tomcat                 latest              9b8179770e78        5 hours ago         584MB
grafana/grafana                    latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                    latest              86ea6f86fc57        4 weeks ago         185MB
quay.io/prometheus/node-exporter   latest              c19ae228f069        3 months ago        26MB
centos                             latest              300e315adb2f        6 months ago        209MB
  • Push the image
[root@localhost ~]# docker push wholegale39/tomcat:latest
The push refers to repository [docker.io/wholegale39/tomcat]
711749be7df9: Pushed
579be2cb5f3b: Pushed
015815b60df5: Pushed
2653d992f4ef: Mounted from library/centos
latest: digest: sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3 size: 1163
  • After a successful push, the image is visible on the Docker Hub web page
  • Pull and use the image; any user can download it
[root@localhost ~]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
wch/tomcat                         latest              9b8179770e78        6 hours ago         584MB
wholegale39/tomcat                 latest              9b8179770e78        6 hours ago         584MB
grafana/grafana                    latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                    latest              86ea6f86fc57        4 weeks ago         185MB
quay.io/prometheus/node-exporter   latest              c19ae228f069        3 months ago        26MB
centos                             latest              300e315adb2f        6 months ago        209MB
[root@localhost ~]# docker rmi wholegale39/tomcat
Untagged: wholegale39/tomcat:latest
Untagged: wholegale39/tomcat@sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3
[root@localhost ~]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
wch/tomcat                         latest              9b8179770e78        6 hours ago         584MB
grafana/grafana                    latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                    latest              86ea6f86fc57        4 weeks ago         185MB
quay.io/prometheus/node-exporter   latest              c19ae228f069        3 months ago        26MB
centos                             latest              300e315adb2f        6 months ago        209MB
[root@localhost ~]# docker pull wholegale39/tomcat
Using default tag: latest
latest: Pulling from wholegale39/tomcat
Digest: sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3
Status: Downloaded newer image for wholegale39/tomcat:latest
[root@localhost ~]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
wch/tomcat                         latest              9b8179770e78        6 hours ago         584MB
wholegale39/tomcat                 latest              9b8179770e78        6 hours ago         584MB
grafana/grafana                    latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                    latest              86ea6f86fc57        4 weeks ago         185MB
quay.io/prometheus/node-exporter   latest              c19ae228f069        3 months ago        26MB
centos                             latest              300e315adb2f        6 months ago        209MB

Setting Up a Local Registry

  • Run a local registry service
docker run -d -p 5000:5000 -v /home/wch/localRegistry:/var/lib/registry registry
Unable to find image 'registry:latest' locally
latest: Pulling from library/registry
ddad3d7c1e96: Pull complete
6eda6749503f: Pull complete
363ab70c2143: Pull complete
5b94580856e6: Pull complete
12008541203a: Pull complete
Digest: sha256:aba2bfe9f0cff1ac0618ec4a54bfefb2e685bbac67c8ebaf3b6405929b3e616f
Status: Downloaded newer image for registry:latest
b7d56c751422ec434dd5217db4afac626fcf452b2d86554ea08126d8ee226cfb
[root@localhost wch]# docker ps
CONTAINER ID        IMAGE                                     COMMAND                  CREATED             STATUS              PORTS                                        NAMES
b7d56c751422        registry                                  "/entrypoint.sh /etc…"   8 seconds ago       Up 4 seconds        0.0.0.0:5000->5000/tcp                       happy_mclean
b43861a53e32        wch/tomcat                                "/bin/sh -c '/usr/lo…"   6 hours ago         Up 6 hours          0.0.0.0:8010->8080/tcp                       inspiring_rubin
2649b0f316c3        quay.io/prometheus/node-exporter:latest   "/bin/node_exporter …"   5 days ago          Up 24 hours                                                      node_exporter
314026ddbcc3        grafana/grafana:latest                    "/run.sh"                5 days ago          Up 24 hours         0.0.0.0:26->26/tcp, 0.0.0.0:3000->3000/tcp   grafana
407fd7fc14a6        prom/prometheus:latest                    "/bin/prometheus --c…"   5 days ago          Up 24 hours         8086/tcp, 0.0.0.0:9090->9090/tcp             prometheus
  • Re-tag the image
[root@localhost docker]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
wch/tomcat                         latest              9b8179770e78        6 hours ago         584MB
wholegale39/tomcat                 latest              9b8179770e78        6 hours ago         584MB
grafana/grafana                    latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                    latest              86ea6f86fc57        4 weeks ago         185MB
registry                           latest              1fd8e1b0bb7e        2 months ago        26.2MB
quay.io/prometheus/node-exporter   latest              c19ae228f069        3 months ago        26MB
centos                             latest              300e315adb2f        6 months ago        209MB
docker tag wholegale39/tomcat 192.168.9.140:5000/wholegale39/tomcat
[root@localhost docker]# docker images
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
192.168.9.140:5000/wholegale39/tomcat   latest              9b8179770e78        6 hours ago         584MB
wch/tomcat                              latest              9b8179770e78        6 hours ago         584MB
wholegale39/tomcat                      latest              9b8179770e78        6 hours ago         584MB
grafana/grafana                         latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                         latest              86ea6f86fc57        4 weeks ago         185MB
registry                                latest              1fd8e1b0bb7e        2 months ago        26.2MB
quay.io/prometheus/node-exporter        latest              c19ae228f069        3 months ago        26MB
centos                                  latest              300e315adb2f        6 months ago        209MB
  • Push the image
[root@localhost docker]# docker push 192.168.9.140:5000/wholegale39/tomcat:latest
The push refers to repository [192.168.9.140:5000/wholegale39/tomcat]
711749be7df9: Pushed
579be2cb5f3b: Pushed
015815b60df5: Pushed
2653d992f4ef: Pushed
latest: digest: sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3 size: 1163
  • Pull and use the image; any user on the internal network can download it
[root@localhost docker]# docker images
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
192.168.9.140:5000/wholegale39/tomcat   latest              9b8179770e78        7 hours ago         584MB
wch/tomcat                              latest              9b8179770e78        7 hours ago         584MB
wholegale39/tomcat                      latest              9b8179770e78        7 hours ago         584MB
grafana/grafana                         latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                         latest              86ea6f86fc57        4 weeks ago         185MB
registry                                latest              1fd8e1b0bb7e        2 months ago        26.2MB
quay.io/prometheus/node-exporter        latest              c19ae228f069        3 months ago        26MB
centos                                  latest              300e315adb2f        6 months ago        209MB
[root@localhost docker]# docker rmi 192.168.9.140:5000/wholegale39/tomcat
Untagged: 192.168.9.140:5000/wholegale39/tomcat:latest
Untagged: 192.168.9.140:5000/wholegale39/tomcat@sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3
[root@localhost docker]# docker pull 192.168.9.140:5000/wholegale39/tomcat
Using default tag: latest
latest: Pulling from wholegale39/tomcat
Digest: sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3
Status: Downloaded newer image for 192.168.9.140:5000/wholegale39/tomcat:latest
[root@localhost docker]# docker images
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
192.168.9.140:5000/wholegale39/tomcat   latest              9b8179770e78        7 hours ago         584MB
wch/tomcat                              latest              9b8179770e78        7 hours ago         584MB
wholegale39/tomcat                      latest              9b8179770e78        7 hours ago         584MB
grafana/grafana                         latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                         latest              86ea6f86fc57        4 weeks ago         185MB
registry                                latest              1fd8e1b0bb7e        2 months ago        26.2MB
quay.io/prometheus/node-exporter        latest              c19ae228f069        3 months ago        26MB
centos                                  latest              300e315adb2f        6 months ago        209MB
  • Query image information in the Registry
[root@localhost docker]# curl http://192.168.9.140:5000/v2/_catalog
{"repositories":["wholegale39/tomcat"]}
[root@localhost docker]# curl http://192.168.9.140:5000/v2/wholegale39/tomcat/tags/list
{"name":"wholegale39/tomcat","tags":["latest"]}
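Note: Docker assumes HTTPS for registries by default, so pushing to a plain-HTTP registry such as 192.168.9.140:5000 may fail with an "http: server gave HTTP response to HTTPS client" error. A common fix is to whitelist the registry in /etc/docker/daemon.json and restart Docker (a sketch; adjust the address to your environment):

{
  "insecure-registries": ["192.168.9.140:5000"]
}

systemctl restart docker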

Common Commands

  • List running containers
[root@localhost ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
b7d56c751422        registry            "/entrypoint.sh /etc…"   25 hours ago        Up 24 hours         0.0.0.0:5000->5000/tcp   happy_mclean
  • List containers in all states
[root@localhost ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS                    NAMES
b7d56c751422        registry            "/entrypoint.sh /etc…"   25 hours ago        Up 24 hours                 0.0.0.0:5000->5000/tcp   happy_mclean
b43861a53e32        wch/tomcat          "/bin/sh -c '/usr/lo…"   31 hours ago        Exited (137) 24 hours ago                            inspiring_rubin
  • Enter a container
[root@localhost ~]# docker exec -it CONTAINERID /bin/bash
  • Start a container
[root@localhost ~]# docker start CONTAINERID
  • Stop a container
[root@localhost ~]# docker stop CONTAINERID
  • Restart a container
[root@localhost ~]# docker restart CONTAINERID
  • View logs
[root@localhost ~]# docker logs -f CONTAINERID
  • Pause a container
[root@localhost ~]# docker pause CONTAINERID
  • Unpause a paused container
[root@localhost ~]# docker unpause CONTAINERID
  • Remove a container that is not running
[root@localhost ~]# docker rm CONTAINERID
  • Remove a specific unused image
[root@localhost ~]# docker rmi IMAGEID
  • Remove all unused images
[root@xdja wch]# docker image prune -a
WARNING! This will remove all images without at least one container associated to them.
Are you sure you want to continue? [y/N] y
  • Save an image to a file
[root@localhost ~]# docker save -o tomcat wholegale39/tomcat
  • Load the image on another machine
[root@xdja wch]# docker load -i tomcat
2653d992f4ef: Loading layer [==================================================>]  216.5MB/216.5MB
015815b60df5: Loading layer [==================================================>]  360.4MB/360.4MB
579be2cb5f3b: Loading layer [==================================================>]  15.27MB/15.27MB
711749be7df9: Loading layer [==================================================>]  65.02kB/65.02kB
Loaded image: wholegale39/tomcat:latest
  • Batch-delete orphaned volumes
[root@localhost ~]# docker volume rm $(docker volume ls -q)
  • Copy files between host and container
[root@localhost ~]# docker cp /home/wch containerID:/home/
[root@localhost ~]# docker cp containerID:/home/ /home/wch

Docker Networking

  • List networks
[root@localhost docker]# docker network ls
NETWORK ID          NAME                         DRIVER              SCOPE
0a6e7337301f        bridge                       bridge              local
e558d63e1ee8        host                         host                local
c7da7be15130        none                         null                local
4965012c623e        prometheus_grafana_monitor   bridge              local

none network: only the lo interface is present; it suits applications with high security requirements

host network: the container shares the Docker host's network stack, so its network configuration is identical to the host's. The biggest benefit is better performance, but port conflicts must be considered

bridge network: the Docker daemon creates a virtual Ethernet bridge, docker0, and packets are automatically forwarded between any interfaces attached to it. By default the daemon creates a pair of peer interfaces, makes one of them the container's eth0 and places the other in the host's namespace, thereby connecting all containers on the host to this internal network. The daemon also allocates an IP address and subnet to the container from the bridge's private address space. Bridge mode is Docker's default; a user-defined bridge sketch follows.
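Besides the default docker0 bridge, user-defined bridge networks can be created; containers on the same user-defined bridge can also resolve one another by container name. A sketch (the network and container names here are made up for illustration):

# Create a user-defined bridge with its own subnet
docker network create --driver bridge --subnet 172.20.0.0/16 my_net
# Attach a container to it
docker run -d --name web_on_mynet --network my_net httpd
# Inspect the network and its attached containers
docker network inspect my_net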

  • Dynamic port mapping: map port 80 to a dynamically assigned host port
[root@localhost docker]# docker run -p 80 httpd
  • Fixed port mapping: map port 80 to host port 8080
[root@localhost docker]# docker run -p 8080:80 httpd

For every mapped port, the host starts a docker-proxy process to handle traffic destined for the container

[root@localhost docker]# ps -ef|grep docker-proxy
root       910 16786  0 Jun23 ?        00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 5000 -container-ip 172.17.0.1 -container-port 5000
root     17024 16786  0 Jun22 ?        00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 3000 -container-ip 172.26.0.2 -container-port 3000
root     17038 16786  0 Jun22 ?        00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 26 -container-ip 172.26.0.2 -container-port 26
root     17068 16786  0 Jun22 ?        00:01:57 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9090 -container-ip 172.26.0.3 -container-port 9090
root     27721 17810  0 09:59 pts/0    00:00:00 grep --color=auto docker-proxy

Cross-host networking options include:

1. Docker-native overlay and macvlan;

2. Third-party solutions: commonly flannel, weave, and calico.

Overlay networks use tunneling: packets are encapsulated in UDP for transport. Because packets must be encapsulated and decapsulated, there is extra CPU and network overhead. Although almost all overlay solutions use the Linux kernel's vxlan module underneath to keep this overhead as low as possible, the overhead still exists compared with an underlay network. So Macvlan, Flannel host-gw, and Calico perform better than Docker overlay, Flannel vxlan, and Weave.

Compared with underlays, overlays support more Layer 2 segments, make better use of existing networks, and avoid exhausting physical switches' MAC tables, so these trade-offs must be weighed when choosing a solution; a macvlan sketch follows.
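For reference, a macvlan network is created roughly like this (a sketch; the parent interface ens33 and the subnet values are assumptions and must match your own LAN):

docker network create -d macvlan \
  --subnet=192.168.9.0/24 --gateway=192.168.9.1 \
  -o parent=ens33 mac_net
docker run -d --name web_macvlan --network mac_net httpd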

Docker Storage

Docker provides containers with two kinds of resources for storing data:

1. Image layers and the container layer, managed by the storage driver.

2. Data Volumes.

storage driver


A container consists of a single writable container layer on top and a number of read-only image layers below it; the container's data lives in these layers. The defining property of this layered structure is copy-on-write:

1. New data is written directly to the topmost container layer.

2. Modifying existing data first copies it from the image layer to the container layer; the modified data is saved in the container layer, and the image layer remains unchanged.

3. If several layers contain a file with the same name, the user sees only the file in the topmost layer.

The layered structure makes creating, sharing, and distributing images and containers very efficient, and all of this is thanks to the Docker storage driver. It is the storage driver that stacks the layers and presents the user with a single, merged, unified view; a quick way to see this in action is shown below.
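Copy-on-write can be observed directly with docker diff, which lists what the container layer has added (A), changed (C), or deleted (D) relative to the image layers. A small sketch (the container name cow_demo is made up):

docker run -d --name cow_demo httpd
docker exec cow_demo sh -c 'echo hi > /usr/local/apache2/htdocs/new.html'
docker diff cow_demo        # shows the new file sitting in the writable container layer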

Docker supports multiple storage drivers: AUFS, Device Mapper, Btrfs, OverlayFS, VFS, and ZFS. All of them implement the layered architecture while having their own characteristics. Choosing a specific storage driver is hard for Docker users because:

1. No single driver fits every scenario.

2. The drivers themselves are evolving and iterating quickly.

Docker's official advice, however, is simple: prefer the Linux distribution's default storage driver.

  • CentOS 7.4
[root@localhost docker]# docker info
Containers: 5
 Running: 3
 Paused: 1
 Stopped: 1
Images: 14
Server Version: 18.09.7
Storage Driver: devicemapper
  • Ubuntu 18.04
wch@ubuntu:~$ sudo docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.13
 Storage Driver: overlay2
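If you do need to override the default, the storage driver can be set in /etc/docker/daemon.json; a sketch (note that switching drivers makes existing images and containers invisible until you switch back):

{
  "storage-driver": "overlay2"
}

Then restart Docker with systemctl restart docker.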

For some containers, putting data directly in the layers maintained by the storage driver is a good choice, for example stateless applications. Stateless means the container has no data to persist and can be created from the image at any time.

Take busybox: it is a toolbox started to run commands such as wget and ping; it does not need to keep data for later use, and it exits when done. When the container is deleted, the working data in the container layer is deleted with it, which is fine; just start a new container next time.

But this approach does not suit another class of applications that need to persist data: they load existing data at startup and want newly produced data kept when the container is destroyed. In other words, these containers are stateful.

This is where Docker's other storage mechanism comes in: the Data Volume.

Data Volume

A Data Volume is essentially a directory or file in the Docker host's filesystem that can be mounted directly into a container's filesystem.

Data Volumes have the following characteristics:

1. A Data Volume is a directory or file, not an unformatted disk (block device).

2. Containers can read and write the data in a volume.

3. Volume data is preserved permanently, even after the container using it has been destroyed.

Docker provides two types of volume: bind mount and docker managed volume.

  • bind mount
    • A bind mount mounts a directory or file that already exists on the host into the container.
    • The -v format is <host path>:<container path>. /usr/local/apache2/htdocs is where Apache Server keeps its static files. Because /usr/local/apache2/htdocs already exists in the image, its original data is hidden and replaced by the data in the host's /home/wch/docker/httpd/
[root@localhost httpd]# pwd
/home/wch/docker/httpd
[root@localhost httpd]# ll
total 4
-rw-r--r-- 1 root root 72 Jun 24 15:17 index.html
[root@localhost httpd]# cat index.html

This is a file in host file system !

[root@localhost httpd]# docker run -d -p 80:80 -v /home/wch/docker/httpd:/usr/local/apache2/htdocs httpd
275953f4f8bcc276dc83c63147a5d05582c4b216eb80855d12a1eb3d7da5baae
[root@localhost httpd]# curl 127.0.0.1:80

This is a file in host file system !

[root@localhost httpd]# echo "update index page" > index.html
[root@localhost httpd]# cat index.html
update index page
[root@localhost httpd]# curl 127.0.0.1:80
update index page
# Read-write by default
[root@localhost httpd]# docker run -d -p 80:80 -v /home/wch/docker/httpd:/usr/local/apache2/htdocs httpd
# Can be made read-only: the bind mount data cannot then be modified from inside the container; only the host may change it
[root@localhost httpd]# docker run -d -p 80:80 -v /home/wch/docker/httpd:/usr/local/apache2/htdocs:ro httpd
  • docker managed volume

The biggest usage difference between a docker managed volume and a bind mount is that no mount source needs to be specified; naming the mount point is enough

If the mount point refers to an existing directory, its original data is copied into the volume on the host

[root@localhost httpd]# docker run -d -p 80:80 -v /usr/local/apache2/htdocs httpd
6c0c6c8e15ebc5e99ff53d60a9e59994dc79909b80f1020f15271e9012958c64
[root@localhost httpd]# docker inspect 6c0c6c8e15eb
"Mounts": [
    {
        "Type": "volume",
        "Name": "02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154",
        "Source": "/var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data",
        "Destination": "/usr/local/apache2/htdocs",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
]
[root@localhost httpd]# docker volume ls
DRIVER              VOLUME NAME
local               02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154
local               0449d527e57c9b7b48789449fb02ae9c598db4d982a6c9af4f56cddea57a1b49
[root@localhost httpd]# docker inspect 02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154
[
    {
        "CreatedAt": "2021-06-24T15:35:00+08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data",
        "Name": "02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154",
        "Options": null,
        "Scope": "local"
    }
]
[root@localhost httpd]# ls -l /var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data
total 4
-rw-r--r-- 1 mysql mysql 45 Jun 12  2007 index.html
[root@localhost httpd]# cat /var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data/index.html

It works!

[root@localhost httpd]# curl 127.0.0.1:80

It works!

# For a docker managed volume, docker rm can be given the -v flag so Docker deletes
# the volumes used by the container as well, provided no other container mounts them
[root@localhost httpd]# docker rm -v 6c0c6c8e15eb
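A docker managed volume can also be given a human-readable name, which avoids the long auto-generated IDs above. A sketch (the volume name htdocs_data is made up):

docker volume create htdocs_data
docker run -d -p 80:80 -v htdocs_data:/usr/local/apache2/htdocs httpd
docker volume inspect htdocs_data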

Data Sharing

  • Sharing data between a container and the host

    • bind mount: mount the directory to be shared directly into the container
    • docker managed volume
    [root@localhost httpd]# curl 127.0.0.1:80

    It works!

    [root@localhost httpd]# docker cp /home/wch/docker/httpd/index.html 6c0c6c8e15eb:/usr/local/apache2/htdocs
    [root@localhost httpd]# curl 127.0.0.1:80
    This is a new index page for web cluster
    [root@localhost httpd]# cat /var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data/index.html
    This is a new index page for web cluster
  • Sharing data between containers

[root@localhost httpd]# docker run --name web1 -d -p 80 -v /home/wch/docker/httpd/:/usr/local/apache2/htdocs httpd
2126366ffe2cb5aca7b97012b41779b7963ca41c4afd797a992d8a3c2e471ab4
[root@localhost httpd]# docker run --name web2 -d -p 80 -v /home/wch/docker/httpd/:/usr/local/apache2/htdocs httpd
03a859cfda48a472ff28c313638c6054633e30e7ed77d17d0919a6e95ecd164f
[root@localhost httpd]# docker run --name web3 -d -p 80 -v /home/wch/docker/httpd/:/usr/local/apache2/htdocs httpd
27483f6f7ccccce086594501d21e0b9eef1fdcc9f3145dd1a36e0c9c7910322a
[root@localhost httpd]# docker ps
CONTAINER ID        IMAGE               COMMAND              CREATED             STATUS              PORTS                  NAMES
27483f6f7ccc        httpd               "httpd-foreground"   8 seconds ago       Up 5 seconds        0.0.0.0:1026->80/tcp   web3
03a859cfda48        httpd               "httpd-foreground"   17 seconds ago      Up 14 seconds       0.0.0.0:1025->80/tcp   web2
2126366ffe2c        httpd               "httpd-foreground"   29 seconds ago      Up 26 seconds       0.0.0.0:1024->80/tcp   web1
[root@localhost httpd]# curl 127.0.0.1:1024
update index page
[root@localhost httpd]# curl 127.0.0.1:1025
update index page
[root@localhost httpd]# curl 127.0.0.1:1026
update index page
[root@localhost httpd]# echo "This is a new index page for web cluster" > index.html
[root@localhost httpd]# curl 127.0.0.1:1024
This is a new index page for web cluster
[root@localhost httpd]# curl 127.0.0.1:1025
This is a new index page for web cluster
[root@localhost httpd]# curl 127.0.0.1:1026
This is a new index page for web cluster
  • volume container
    • bind mount: holds the Web Server's static files
    • docker managed volume: holds some utilities (empty for now; this is just an example)
# docker create is used because a volume container's only job is to provide data;
# it does not need to be in a running state
[root@localhost httpd]# docker create --name vc_data -v /home/wch/docker/httpd:/usr/local/apache2/htdocs -v /other/useful/tools busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
b71f96345d44: Pull complete
Digest: sha256:930490f97e5b921535c153e0e7110d251134cc4b72bbb8133c6a5065cc68580d
Status: Downloaded newer image for busybox:latest
948a7dd94baf96c7b6291d4830df7d314a65680c687bad52ece2432e1190ee55
[root@localhost httpd]# docker inspect vc_data
"Mounts": [
    {
        "Type": "bind",
        "Source": "/home/wch/docker/httpd",
        "Destination": "/usr/local/apache2/htdocs",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
    {
        "Type": "volume",
        "Name": "9ea52d28e5824755983b45ebd1a28ea220eecadd2e653e3537143191dd97578f",
        "Source": "/var/lib/docker/volumes/9ea52d28e5824755983b45ebd1a28ea220eecadd2e653e3537143191dd97578f/_data",
        "Destination": "/other/useful/tools",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
]
# Other containers can use the vc_data volume container via --volumes-from
[root@localhost httpd]# docker run --name web4 -d -p 80 --volumes-from vc_data httpd
c9e05ea4c552687c79f00698ae56f1ab2c4654192105db309d09dd41eb3fcbee
[root@localhost httpd]# docker inspect web4
"Mounts": [
    {
        "Type": "bind",
        "Source": "/home/wch/docker/httpd",
        "Destination": "/usr/local/apache2/htdocs",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
    {
        "Type": "volume",
        "Name": "9ea52d28e5824755983b45ebd1a28ea220eecadd2e653e3537143191dd97578f",
        "Source": "/var/lib/docker/volumes/9ea52d28e5824755983b45ebd1a28ea220eecadd2e653e3537143191dd97578f/_data",
        "Destination": "/other/useful/tools",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
],
  • data-packed volume container

The idea is to bake the data into the image and then share it through a docker managed volume

Containers can read the data in the volume correctly. A data-packed volume container is self-contained and does not depend on the host for data, so it is highly portable. It suits scenarios that only use static data, such as application configuration and a web server's static files.

[root@localhost httpd]# pwd
/home/wch/httpd
[root@localhost httpd]# ll
total 4
-rw-r--r-- 1 root root 91 Jun 24 17:00 Dockerfile
drwxr-xr-x 2 root root 23 Jun 24 16:57 htdocs
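The Dockerfile itself is not shown in the original notes, but it can be reconstructed from the three build steps below:

FROM busybox:latest
ADD htdocs /usr/local/apache2/htdocs
VOLUME /usr/local/apache2/htdocs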
[root@localhost httpd]# docker build -t datapacked .
Sending build context to Docker daemon  3.584kB
Step 1/3 : FROM busybox:latest
 ---> 69593048aa3a
Step 2/3 : ADD htdocs /usr/local/apache2/htdocs
 ---> aa1f4298814e
Step 3/3 : VOLUME /usr/local/apache2/htdocs
 ---> Running in 71362c795108
Removing intermediate container 71362c795108
 ---> cb8ced11e74c
Successfully built cb8ced11e74c
Successfully tagged datapacked:latest
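The run below references vc_data2; presumably it was created from the freshly built datapacked image the same way vc_data was created earlier, e.g.:

[root@localhost httpd]# docker create --name vc_data2 datapacked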
[root@localhost httpd]# docker run -d -p 80 --volumes-from vc_data2 httpd
b9da47ebcf64477c77fed8bb85613765485624b20161daf1508b56e326880447
[root@localhost httpd]# curl 127.0.0.1:1028
This is a new index page for web cluster

Multi-Host Management

Docker Machine is a tool that lets you install Docker on virtual hosts and manage those hosts with the docker-machine command.

Docker Machine can also manage all of your Docker hosts centrally, for example quickly installing Docker on 100 servers.

Installation

[root@localhost httpd]# curl -L https://github.com/docker/machine/releases/download/v0.16.2/docker-machine-`uname -s`-`uname -m` > /tmp/docker-machine && chmod +x /tmp/docker-machine && sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
[root@localhost httpd]# docker-machine -v
docker-machine version 0.16.2, build bd45ab13
# Install bash auto-completion
[root@localhost httpd]# yum -y install bash-completion

Configuration and Management

  • List the current machines
[root@localhost httpd]# docker-machine ls
NAME   ACTIVE   DRIVER   STATE   URL   SWARM   DOCKER   ERRORS
  • Set up passwordless SSH login
# Press Enter through the prompts to generate the keys
[root@localhost httpd]# ssh-keygen
# Copy the keys over to client1
[root@localhost httpd]# ssh-copy-id 192.168.9.31
# Verify that passwordless login works
[root@localhost httpd]# ssh [email protected]
  • Create a machine
[root@localhost httpd]# docker-machine create --driver generic --generic-ip-address=192.168.9.31 client1
Running pre-create checks...
Creating machine...
(client1) No SSH key specified. Assuming an existing key at the default location.
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with centos...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env client1
  • List the machines again
[root@localhost httpd]# docker-machine ls
NAME      ACTIVE   DRIVER    STATE     URL                       SWARM   DOCKER        ERRORS
client1   -        generic   Running   tcp://192.168.9.31:2376           v18.06.3-ce
  • Show all of client1's environment variables
[root@localhost docker]# docker-machine env client1
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.9.31:2376"
export DOCKER_CERT_PATH="/root/.docker/machine/machines/client1"
export DOCKER_MACHINE_NAME="client1"
# Run this command to configure your shell:
# eval $(docker-machine env client1)
  • Switch to client1 and operate on it
[root@localhost docker]# eval $(docker-machine env client1)
[root@localhost docker]# docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
wholegale39/tomcat   latest              9b8179770e78        2 days ago          584MB
  • Other commands
[root@localhost docker]# docker-machine version client1
18.06.3-ce
[root@localhost docker]# docker-machine status client1
Running

Container Monitoring

Built-in Command-Line Tools

[root@client1 docker]# docker ps
[root@client1 docker]# docker container ls
[root@localhost ~]# docker container ls -a
[root@localhost ~]# docker container top containerID
[root@localhost ~]# docker stats

sysdig

Sysdig is an open-source system analysis tool from Sysdig Cloud, largely based on Lua. Sysdig captures system state and behavior from a running system, then filters and analyzes it; its functionality surpasses comparable open-source tools. Sysdig can be thought of as strace + tcpdump + lsof + htop + iftop plus other system analysis tools, combined.

  • Installation
[root@localhost ~]# docker run -i -t --name sysdig --privileged -v /var/run/docker.sock:/host/var/run/docker.sock -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro sysdig/sysdig
root@6d57b899e866:/# csysdig
(screenshot: sysdig overview)
(screenshot: container monitoring)

Weave Scope

Weave Scope is used to monitor, visualize, and manage Docker and Kubernetes.

Weave Scope automatically generates a map of the relationships between containers, which helps in understanding those relationships and in monitoring containerized and microservice applications.

  • Installation
# Download the latest release
[root@localhost ~]# sudo curl -L https://github.com/weaveworks/scope/releases/download/latest_release/scope -o /usr/local/bin/scope
# Make it executable
[root@localhost ~]# sudo chmod a+x /usr/local/bin/scope
# scope launch starts Weave Scope as a container; add a username and password for better security
[root@localhost ~]# scope launch -app.basicAuth -app.basicAuth.password 123456 -app.basicAuth.username user -probe.basicAuth -probe.basicAuth.password 123456 -probe.basicAuth.username user
  • Browse to http://[Host IP]:4040/; containers can be operated on at will from there, and the feature set is very powerful
(screenshot: Weave Scope)
(screenshot: host monitoring)
  • Multi-host monitoring: install Weave Scope on several machines with the commands above
# First stop the weave scope container on each machine
[root@client1 ~]# docker stop 1215c4a1d22e
# Then run on each machine
[root@localhost ~]# scope launch 192.168.9.31 192.168.9.140
5023feeda6c0e299c6c56cf7f1e1a4be1c9b8532a591f1aa326fbf8c75c4d561
Scope probe started
Weave Scope is listening at the following URL(s):
(screenshot: multi-host monitoring)

cAdvisor

  • For details, see the separate article "Prometheus & Grafana Performance Monitoring"

Prometheus

  • For details, see the separate article "Prometheus & Grafana Performance Monitoring"

Monitoring Tool Comparison

Aspect / Tool             Docker ps/top/stats   sysdig   Weave Scope   cAdvisor   Prometheus
Ease of deployment        sssss                 sssss    ssss          sssss      sss
Level of detail           sss                   sssss    sssss         sss        sssss
Multi-host monitoring     none                  none     sssss         none       sssss
Alerting                  none                  none     none          none       ssss
Non-container resources   none                  sss      sss           ss         sssss

"s" is short for strong; more s's means better support

Container Log Management

Docker logs

  • attach: you cannot see earlier logs, only subsequent output, and detaching is fairly cumbersome
[root@localhost ~]# docker attach containerID
  • logs
[root@localhost ~]# docker logs -f containerID

Docker logging driver

Sending container logs to STDOUT and STDERR is Docker's default logging behavior. In practice, Docker offers a variety of logging mechanisms, called logging drivers, to help users extract log information from running containers.

Docker's default logging driver is json-file.

[root@localhost ~]# cat /var/lib/docker/containers/03a859cfda48a472ff28c313638c6054633e30e7ed77d17d0919a6e95ecd164f/03a859cfda48a472ff28c313638c6054633e30e7ed77d17d0919a6e95ecd164f-json.log
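The json-file driver's behavior (and the default driver itself) can be tuned in /etc/docker/daemon.json; the options below are a sketch showing log rotation so the json log files do not grow without bound:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

A single container can also be switched to another driver at run time, e.g. docker run --log-driver syslog httpd.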

ELK

Filebeat is a lightweight shipper for forwarding and centralizing log data. It watches the log files or locations you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing. There are also beats for collecting network traffic data; system, process, and filesystem-level CPU and memory usage; Windows event logs; audit logs; system runtime data; and more.

Logstash reads raw logs, analyzes and filters them, and then forwards them to other components (such as Elasticsearch) for indexing or storage. Logstash supports a rich set of Input and Output types and can handle logs from all kinds of applications. It runs on the JVM and is relatively resource-hungry.

Elasticsearch is a near-real-time full-text search engine, designed to handle and search huge volumes of log data.

Kibana is a JavaScript-based web UI dedicated to visualizing Elasticsearch data. Kibana can query Elasticsearch and present the results in rich charts, and users can build dashboards to monitor system logs.

Filebeat > Kafka cluster > Logstash cluster > Elasticsearch cluster > Kibana
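A minimal Filebeat sketch for the first hop of this pipeline, shipping Docker's json-file logs into Kafka (the kafka host name and topic are assumptions for this example):

filebeat.inputs:
- type: container
  paths:
    - /var/lib/docker/containers/*/*.log
output.kafka:
  hosts: ["kafka:9092"]
  topic: "docker-logs"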

  • Download the project with git clone
[root@localhost docker-elk]# git clone https://github.com/deviantony/docker-elk.git
  • Install
[root@localhost docker-elk]# docker-compose up
Building elasticsearch
Sending build context to Docker daemon  3.584kB
Step 1/2 : ARG ELK_VERSION
Step 2/2 : FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
7.13.2: Pulling from elasticsearch/elasticsearch
ddf49b9115d7: Already exists
815a15889ec1: Pull complete
ba5d33fc5cc5: Pull complete
976d4f887b1a: Extracting [==============>                                    ]  104.7MB/354.9MB
9b5ee4563932: Download complete
ef11e8f17d0c: Download complete
3c5ad4db1e24: Download complete
  • Reset the passwords
[root@localhost docker-elk]# docker-compose exec -T elasticsearch bin/elasticsearch-setup-passwords auto --batch
Changed password for user apm_system
PASSWORD apm_system = 4OHYCFm7yZhsVG5tQDfl
Changed password for user kibana_system
PASSWORD kibana_system = oksG2cfrYEFDFqzPLpu3
Changed password for user kibana
PASSWORD kibana = oksG2cfrYEFDFqzPLpu3
Changed password for user logstash_system
PASSWORD logstash_system = nHU6m8iuBoGKpHI4Yt1p
Changed password for user beats_system
PASSWORD beats_system = YTjhnmgKxLlTVOY8V9PJ
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = eihRRu2eDt05zY7AbqYu
Changed password for user elastic
PASSWORD elastic = fpgKWAI6tkQKkS8c8zzD
  • Update the password of the elastic user in the following configuration files
kibana/config/kibana.yml
logstash/config/logstash.yml
logstash/pipeline/logstash.conf
  • Restart the services
[root@localhost docker-elk]# docker-compose restart
Restarting docker-elk_logstash_1      ... done
Restarting docker-elk_kibana_1        ... done
Restarting docker-elk_elasticsearch_1 ... done
(screenshot: log data)

https://blog.csdn.net/soulteary/article/details/105921729

Graylog

Graylog is an open-source tool for log aggregation, analysis, auditing, display, and alerting. Functionally it is similar to ELK but simpler; its cleaner, more efficient design and easy deployment have quickly won it many fans.

  • Create the configuration files

https://raw.githubusercontent.com/Graylog2/graylog-docker/4.1/config/log4j2.xml

https://raw.githubusercontent.com/Graylog2/graylog-docker/4.1/config/graylog.conf

  • Create the graylog.conf file
#############################
# GRAYLOG CONFIGURATION FILE
#############################
# (The stock graylog.conf ships with extensive inline documentation; only the
#  settings that are actually set in this deployment are reproduced here.)
is_master = true
node_id_file = /usr/share/graylog/data/config/node-id
password_secret = replacethiswithyourownsecret!
root_password_sha2 = 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
bin_dir = /usr/share/graylog/bin
data_dir = /usr/share/graylog/data
plugin_dir = /usr/share/graylog/plugin
http_bind_address = 0.0.0.0:9000
elasticsearch_hosts = http://elasticsearch:9200
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 5
retention_strategy = delete
elasticsearch_shards = 1
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
Latency spikes can occur after quiet periods.#  - blocking#     High throughput, low latency, higher CPU usage.#  - busy_spinning#     Avoids syscalls which could introduce latency jitter. Best when threads can be bound to specific CPU cores.processor_wait_strategy = blocking# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.# Must be a power of 2. (512, 1024, 2048, ...)ring_size = 65536inputbuffer_ring_size = 65536inputbuffer_processors = 2inputbuffer_wait_strategy = blocking# Enable the disk based message journal.message_journal_enabled = true# The directory which will be used to store the message journal. The directory must be exclusively used by Graylog and# must not contain any other files than the ones created by Graylog itself.## ATTENTION:#   If you create a seperate partition for the journal files and use a file system creating directories like 'lost+found'#   in the root directory, you need to create a sub directory for your journal.#   Otherwise Graylog will log an error message that the journal is corrupt and Graylog will not start.message_journal_dir = data/journal# Journal hold messages before they could be written to Elasticsearch.# For a maximum of 12 hours or 5 GB whichever happens first.# During normal operation the journal will be smaller.#message_journal_max_age = 12h#message_journal_max_size = 5gb#message_journal_flush_age = 1m#message_journal_flush_interval = 1000000#message_journal_segment_age = 1h#message_journal_segment_size = 100mb# Number of threads used exclusively for dispatching internal events. Default is 2.#async_eventbus_processors = 2# How many seconds to wait between marking node as DEAD for possible load balancers and starting the actual# shutdown process. Set to 0 if you have no status checking load balancers in front.lb_recognition_period_seconds = 3# Journal usage percentage that triggers requesting throttling for this server node from load balancers. The feature is# disabled if not set.#lb_throttle_threshold_percentage = 95# Every message is matched against the configured streams and it can happen that a stream contains rules which# take an unusual amount of time to run, for example if its using regular expressions that perform excessive backtracking.# This will impact the processing of the entire server. To keep such misbehaving stream rules from impacting other# streams, Graylog limits the execution time for each stream.# The default values are noted below, the timeout is in milliseconds.# If the stream matching for one stream took longer than the timeout value, and this happened more than "max_faults" times# that stream is disabled and a notification is shown in the web interface.#stream_processing_timeout = 2000#stream_processing_max_faults = 3# Since 0.21 the Graylog server supports pluggable output modules. This means a single message can be written to multiple# outputs. 
The next setting defines the timeout for a single output module, including the default output module where all# messages end up.## Time in milliseconds to wait for all message outputs to finish writing a single message.#output_module_timeout = 10000# Time in milliseconds after which a detected stale master node is being rechecked on startup.#stale_master_timeout = 2000# Time in milliseconds which Graylog is waiting for all threads to stop on shutdown.#shutdown_timeout = 30000# MongoDB connection string# See https://docs.mongodb.com/manual/reference/connection-string/ for details#mongodb_uri = mongodb://localhost/graylogmongodb_uri = mongodb://mongo/graylog# Authenticate against the MongoDB server# '+'-signs in the username or password need to be replaced by '%2B'#mongodb_uri = mongodb://grayloguser:secret@localhost:27017/graylog# Use a replica set instead of a single host#mongodb_uri = mongodb://grayloguser:secret@localhost:27017,localhost:27018,localhost:27019/graylog?replicaSet=rs01# DNS Seedlist https://docs.mongodb.com/manual/reference/connection-string/#dns-seedlist-connection-format#mongodb_uri = mongodb+srv://server.example.org/graylog# Increase this value according to the maximum connections your MongoDB server can handle from a single client# if you encounter MongoDB connection problems.mongodb_max_connections = 1000# Number of threads allowed to be blocked by MongoDB connections multiplier. Default: 5# If mongodb_max_connections is 100, and mongodb_threads_allowed_to_block_multiplier is 5,# then 500 threads can block. More than that and an exception will be thrown.# http://api.mongodb.com/java/current/com/mongodb/MongoOptions.html#threadsAllowedToBlockForConnectionMultipliermongodb_threads_allowed_to_block_multiplier = 5# Email transport#transport_email_enabled = false#transport_email_hostname = mail.example.com#transport_email_port = 587#transport_email_use_auth = true#transport_email_auth_username = [email protected]#transport_email_auth_password = secret#transport_email_subject_prefix = [graylog]#transport_email_from_email = [email protected]# Encryption settings## ATTENTION:#    Using SMTP with STARTTLS *and* SMTPS at the same time is *not* possible.# Use SMTP with STARTTLS, see https://en.wikipedia.org/wiki/Opportunistic_TLS#transport_email_use_tls = true# Use SMTP over SSL (SMTPS), see https://en.wikipedia.org/wiki/SMTPS# This is deprecated on most SMTP services!#transport_email_use_ssl = false# Specify and uncomment this if you want to include links to the stream in your stream alert mails.# This should define the fully qualified base url to your web interface exactly the same way as it is accessed by your users.#transport_email_web_interface_url = https://graylog.example.com# The default connect timeout for outgoing HTTP connections.# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).# Default: 5s#http_connect_timeout = 5s# The default read timeout for outgoing HTTP connections.# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).# Default: 10s#http_read_timeout = 10s# The default write timeout for outgoing HTTP connections.# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).# Default: 10s#http_write_timeout = 10s# HTTP proxy for outgoing HTTP connections# ATTENTION: If you configure a proxy, make sure to also configure the "http_non_proxy_hosts" option so internal#            HTTP connections with other nodes does not go through the 
proxy.# Examples:#   - http://proxy.example.com:8123#   - http://username:[email protected]:8123#http_proxy_uri =# A list of hosts that should be reached directly, bypassing the configured proxy server.# This is a list of patterns separated by ",". The patterns may start or end with a "*" for wildcards.# Any host matching one of these patterns will be reached through a direct connection instead of through a proxy.# Examples:#   - localhost,127.0.0.1#   - 10.0.*,*.example.com#http_non_proxy_hosts =# Disable the optimization of Elasticsearch indices after index cycling. This may take some load from Elasticsearch# on heavily used systems with large indices, but it will decrease search performance. The default is to optimize# cycled indices.## ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these#            to your previous settings so they will be migrated to the database!#            This configuration setting is only used on the first start of Graylog. After that,#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.#disable_index_optimization = true# Optimize the index down to <= index_optimization_max_num_segments. A higher number may take some load from Elasticsearch# on heavily used systems with large indices, but it will decrease search performance. The default is 1.## ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these#            to your previous settings so they will be migrated to the database!#            This configuration setting is only used on the first start of Graylog. After that,#            index related settings can be changed in the Graylog web interface on the 'System / Indices' page.#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.#index_optimization_max_num_segments = 1# The threshold of the garbage collection runs. If GC runs take longer than this threshold, a system notification# will be generated to warn the administrator about possible problems with the system. Default is 1 second.#gc_warning_threshold = 1s# Connection timeout for a configured LDAP server (e. g. ActiveDirectory) in milliseconds.#ldap_connection_timeout = 2000# Disable the use of SIGAR for collecting system stats#disable_sigar = false# The default cache time for dashboard widgets. (Default: 10 seconds, minimum: 1 second)#dashboard_widget_default_cache_time = 10s# For some cluster-related REST requests, the node must query all other nodes in the cluster. This is the maximum number# of threads available for this. Increase it, if '/cluster/*' requests take long to complete.# Should be http_thread_pool_size * average_cluster_size if you have a high number of concurrent users.proxied_requests_thread_pool_size = 32# The server is writing processing status information to the database on a regular basis. This setting controls how# often the data is written to the database.# Default: 1s (cannot be less than 1s)#processing_status_persist_interval = 1s# Configures the threshold for detecting outdated processing status records. 
Any records that haven't been updated# in the configured threshold will be ignored.# Default: 1m (one minute)#processing_status_update_threshold = 1m# Configures the journal write rate threshold for selecting processing status records. Any records that have a lower# one minute rate than the configured value might be ignored. (dependent on number of messages in the journal)# Default: 1#processing_status_journal_write_rate_threshold = 1# Configures the prefix used for graylog event indices# Default: gl-events#default_events_index_prefix = gl-events# Configures the prefix used for graylog system event indices# Default: gl-system-events#default_system_events_index_prefix = gl-system-events# Automatically load content packs in "content_packs_dir" on the first start of Graylog.#content_packs_loader_enabled = false# The directory which contains content packs which should be loaded on the first start of Graylog.#content_packs_dir = /usr/share/graylog/data/contentpacks# A comma-separated list of content packs (files in "content_packs_dir") which should be applied on# the first start of Graylog.# Default: empty#content_packs_auto_install = grok-patterns.json# The allowed TLS protocols for system wide TLS enabled servers. (e.g. message inputs, http interface)# Setting this to an empty value, leaves it up to system libraries and the used JDK to chose a default.# Default: TLSv1.2,TLSv1.3  (might be automatically adjusted to protocols supported by the JDK)#enabled_tls_protocols= TLSv1.2,TLSv1.3
- Create the log4j2.xml file
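The file's contents did not survive in the original post. Graylog's source tree ships a fuller reference log4j2.xml; as a stand-in, a minimal hedged sketch that simply sends the Graylog server's own logs to stdout (an assumption, not the author's original file) could look like:

```
# Hypothetical minimal stand-in -- the post's original log4j2.xml was lost.
cat > log4j2.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<Configuration shutdownHook="disable">
  <Appenders>
    <!-- Log the server's own output to stdout, where `docker logs` can see it -->
    <Console name="STDOUT" target="SYSTEM_OUT">
      <PatternLayout pattern="%d %-5p: %c - %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Logger name="org.graylog2" level="info"/>
    <Root level="warn">
      <AppenderRef ref="STDOUT"/>
    </Root>
  </Loggers>
</Configuration>
EOF
```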
- Create the docker-compose_graylog.yml file
version: '2'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    container_name: mongo
    image: mongo:3
    volumes:
      - mongo_data:/data/db
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/6.x/docker.html
  elasticsearch:
    container_name: es
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
    volumes:
      - es_data:/usr/share/elasticsearch/data
    environment:
      - TZ=Asia/Shanghai
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 4g
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    container_name: graylog
    image: graylog/graylog:4.1
    volumes:
      - graylog_journal:/usr/share/graylog/data/journal
      - ./graylog/config:/usr/share/graylog/data/config
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      #- GRAYLOG_HTTP_EXTERNAL_URI=http://1.1.1.1:9000/ # Set the public access URL here; can stay commented out
      - TZ=Asia/Shanghai
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201-12205:12201-12205/udp
# Volumes for persisting data, see https://docs.docker.com/engine/admin/volumes/volumes/
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local
- Install and start the stack
[root@localhost graylog]# docker-compose -f docker-compose_graylog.yml up -d
Creating network "graylog_default" with the default driver
Creating volume "graylog_mongo_data" with local driver
Creating volume "graylog_es_data" with local driver
Creating volume "graylog_graylog_journal" with local driver
Pulling mongodb (mongo:3)...
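Before creating inputs, it's worth confirming the stack came up cleanly. A quick check, run from the same directory as the compose file:

```
# List the three services and their state
docker-compose -f docker-compose_graylog.yml ps
# Follow the Graylog container's log until startup completes
docker logs -f graylog
```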
- Once the stack is up, open http://192.168.9.140:9000/system/inputs in a browser and create an input
(Figure: configuring the input)
- Send data
[root@localhost ~]# curl -XPOST http://127.0.0.1:12201/gelf -p0 -d '{"message":"hello Tinywan222", "host":"127.0.0.1", "facility":"test", "topic": "meme"}'
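The curl above targets a GELF HTTP input. Since the compose file also publishes 12201/udp, a GELF UDP input (assuming one has been created as well) can be smoke-tested with netcat; the payload below carries the minimum GELF fields:

```
# version, host and short_message are the minimum GELF fields
echo -n '{"version":"1.1","host":"127.0.0.1","short_message":"hello graylog via UDP"}' \
  | nc -w1 -u 192.168.9.140 12201
```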
(Figure: the received data displayed in Graylog)

https://www.cnblogs.com/tinywan/p/13378714.html

https://www.cnblogs.com/jonnyan/p/12566994.html

Container platform technologies

- Orchestration typically covers container management, scheduling, cluster definition, and service discovery. Through a container orchestration engine, containers are composed into microservice applications that meet business needs.
- A container management platform is a more general platform layered on top of orchestration engines. It usually supports several engines, abstracts away their implementation details, and offers user-facing conveniences such as an application catalog and one-click application deployment.
- Container-based PaaS gives microservice developers and companies a platform for developing, deploying, and managing applications, letting users focus on their applications instead of the underlying infrastructure.

Container support technologies

- Containers make network topology more dynamic and complex. Dedicated solutions are needed to manage connectivity and isolation between containers, and between containers and other entities.
- Dynamic change is a hallmark of microservice applications: as load rises the cluster creates new containers automatically, and surplus containers are destroyed as load falls. Containers also migrate between hosts based on resource usage, and their IPs and ports change with them.
- Monitoring is vital for infrastructure, and the dynamic nature of containers adds new challenges to it.
- Because containers often migrate between hosts, persistent data must be able to move with them; data-management tools such as Rex-Ray provide this capability.
- Logs provide key evidence for troubleshooting and incident management.
- Security is still a hotly debated topic for the young container ecosystem; OpenSCAP is one container security tool.

Docker registry mirrors (accelerators)

- daocloud.io
- aliyun
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://<your-aliyun-accelerator-id>.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
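To confirm the mirror took effect, docker info lists the configured mirrors:

```
# The configured mirror should appear under "Registry Mirrors:"
docker info | grep -A 1 "Registry Mirrors"
```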

Common issues

WARNING: Found orphan containers

- Symptom: docker-compose reports the following error when starting containers
  - WARNING: Found orphan containers (prometheus, grafana) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
- Cause: docker-compose uses the current directory name as the default project name and prefixes every container it creates with it. Containers sharing a prefix are treated as one project, so when compose files for several services live in the same directory, containers created by the other files look like orphans of the current project, producing the warning above.

- Solutions

- 1. Give the project an explicit name at startup (the -p flag sets the compose project name)
    docker-compose -p node_exporter -f docker-compose_node-exporter.yml up -d
    
- 2. Or keep each compose file in its own directory and run it from there (a third option, using the flag mentioned in the warning itself, is sketched below)
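As the warning text itself suggests, docker-compose can also remove the strays directly (shown here against the compose file from option 1):

```
# Clean up containers that no longer match any service in this compose file
docker-compose -f docker-compose_node-exporter.yml up -d --remove-orphans
```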

Pushing images to a private registry

- Symptom: pushing (or, as below, pulling) an image against the private registry fails with
[root@localhost docker]# docker pull 192.168.9.140:5000/wholegale39/tomcat:latest
Error response from daemon: Get https://192.168.9.140:5000/v2/: http: server gave HTTP response to HTTPS client
- Cause: by default, Docker refuses to talk to registries over plain HTTP.
- Fix: add the registry to insecure-registries in daemon.json, restart the docker service, and push again.
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://dnw6qtuv.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.9.140:5000"]
}
[root@localhost docker]# systemctl restart docker
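With the registry whitelisted, the full round trip works. The repository path comes from the error message above; the local tomcat:latest tag is an assumed starting point:

```
# Tag a local image for the private registry, push it, then pull it back
docker tag tomcat:latest 192.168.9.140:5000/wholegale39/tomcat:latest
docker push 192.168.9.140:5000/wholegale39/tomcat:latest
docker pull 192.168.9.140:5000/wholegale39/tomcat:latest
```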

Reference books

Docker技术入门与实战(第3版)

Docker容器技术与高可用实战

Docker:容器与容器云(第2版)

Docker进阶与实战

循序渐进学Docker

深入浅出Docker

每天5分钟玩转Docker容器技术
