Automated Docker Container Deployment (Part 1)

1. Exposing multiple ports from a Docker container

  To expose just one port, this is what you need to do:

  docker run -p <host_port>:<container_port>

 

  To expose multiple ports, simply provide multiple -p arguments:

  docker run -p <host_port1>:<container_port1> -p <host_port2>:<container_port2>

  Alternatively, you can put the container on a bridge network and EXPOSE the ports you need directly in the Dockerfile, which saves you from passing -p arguments.
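
  As a concrete illustration (the image name and port numbers below are placeholders, not taken from this project), publishing two ports at once looks like this:

  docker run -d -p 8080:80 -p 8443:443 mywebapp:latest

  And in a Dockerfile, EXPOSE documents the ports the container listens on; on a user-defined bridge network, other containers can reach those ports without any -p mapping:

  # Dockerfile (hypothetical example)
  FROM nginx:latest
  EXPOSE 80 443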

 

2. Automated deployment with gitlab-runner

  2.1 Registering the runner

    Command: sudo gitlab-ci-multi-runner register
       

➜  ~ sudo gitlab-ci-multi-runner register
Running in system-mode.                            
                                                   
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
http://193.188.2.40/
Please enter the gitlab-ci token for this runner:
Xyx8EEd4YeaMSLv7Snh7
Please enter the gitlab-ci description for this runner:
[sti-DL]: sdk_AutoDeploy
Please enter the gitlab-ci tags for this runner (comma separated):
sdk_AutoDeploy
Whether to run untagged builds [true/false]:
[false]: true
Whether to lock Runner to current project [true/false]:
[false]: true
Registering runner... succeeded                     runner=Xyx8EEd4
Please enter the executor: shell, docker+machine, docker-ssh+machine, docker-ssh, parallels, ssh, virtualbox, kubernetes, docker:
docker
Please enter the default Docker image (e.g. ruby:2.1):
dockerName:v1.0.0
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded! 
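
  To double-check the registration (a quick sanity check, not part of the original steps), you can list the runners known to the local configuration:

  sudo gitlab-ci-multi-runner list

  This reads /etc/gitlab-runner/config.toml and prints each configured runner.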

     

  2.2 Mounting the host's GPU driver and CUDA device files in Docker

  The runner configuration created by gitlab-ci-multi-runner register is stored in /etc/gitlab-runner/config.toml. The SDK in this project needs to call the NVIDIA graphics driver and CUDA, so the configuration file is modified so that the container started by gitlab-runner automatically loads the driver and the CUDA device files. The specific configuration is as follows:

  ...
  1) Mounting the host driver when gitlab-runner starts a container

  By default, the container that gitlab-runner starts for a .gitlab-ci.yml job does not mount the host's NVIDIA driver. If you want that container to mount the driver, use the following method:

  Modify the config.toml configuration file in the ***** directory, as follows:

[[runners]]
  name = "my-runner2"
  url = "http://182.158.10.30/"
  token = "d67392c2aa868f95ddd0256eb42b1b"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "127.0.0.1:5000/rs-sdk-test"
    privileged = false
    devices = ["/dev/nvidia0:/dev/nvidia0", "/dev/nvidiactl:/dev/nvidiactl", "/dev/nvidia-uvm", "/dev/mem:/dev/mem"]
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
  [runners.cache]


  The key change is the devices line: with it, the containers started by gitlab-runner automatically mount the host's NVIDIA device files.
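
  gitlab-runner normally reloads config.toml automatically when it changes; if it does not (an assumption here, not something the original post covers), restarting the runner service also picks up the new devices entry:

  sudo gitlab-ci-multi-runner restart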

  2) Starting a container from the terminal with the driver and CUDA device files mounted

    sudo docker run -it --name NAME -v /home/:/mnt/home --privileged=true --device /dev/nvidia-uvm:/dev/nvidia-uvm --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl myconda:cuda bash
    sudo docker run --privileged=true --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm --device /dev/mem:/dev/mem -it 127.0.0.1:5000/rs-sdk-test:latest
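
  Once inside the container, you can quickly confirm that the device files were actually mounted (a quick sanity check, not part of the original steps):

    ls -l /dev/nvidia*    # should list /dev/nvidia0, /dev/nvidiactl and /dev/nvidia-uvm
    nvidia-smi            # should report the GPU if the driver utilities are present in the image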

  2.3 Adding .gitlab-ci.yml to the master branch

    The .gitlab-ci.yml file has its own syntax rules; see the official documentation for the details. The .yml file used in this project is shown below:

    

image: localhost:5000/***:v1.0.0

before_script:
  - echo "hello world!"
stages:
  - build
  - test

job1:
  stage: build
  script:
    - nvidia-smi
    - cat /usr/local/cuda/version.txt
    - export OpenCV_DIR=/opt/ros/kinetic/include/opencv-3.3.1-dev
    - source /opt/ros/kinetic/setup.bash
    - cd /builds/rs_ws_test/
    - ls
    # - sh /builds/rs_ws_test/rs_perception/compileRos.sh
    - cd ~/PerceptionSDK/rs_ws_2.2.0/src
    - ls
    - rm -rf ~/PerceptionSDK/rs_ws_2.2.0/src/rs_perception
    - ls
    # - gnome-terminal -x ~/PerceptionSDK/rs_ws_2.2.0/roscore
    # - gnome-terminal -x ~/PerceptionSDK/rs_ws_2.2.0/rosparam list
    # - gnome-terminal -x ~/PerceptionSDK/rs_ws_2.2.0/rosparam get/rosdistro
    - mv /builds/rs_ws_test/rs_perception ~/PerceptionSDK/rs_ws_2.2.0/src/
    - mv ~/PerceptionSDK/rs_ws_2.2.0/src/rs_perception/tensorRT ~/PerceptionSDK/rs_ws_2.2.0/src/rs_perception/tensorRT10
    - mv ~/PerceptionSDK/rs_ws_2.2.0/src/rs_perception/tensorRT9 ~/PerceptionSDK/rs_ws_2.2.0/src/rs_perception/tensorRT
    - cd ~/PerceptionSDK/rs_ws_2.2.0
    - catkin_make
    - echo "hello world!"
  tags:
    - perception-runner

 

  2.4 A problem encountered after writing the .gitlab-ci.yml configuration file

[Figure 1: screenshot of the problem encountered]

    Solution: still being worked out; to be written up once it is solved...

 

  

 

Reposted from: https://www.cnblogs.com/kerngeeksund/p/10552561.html
