Notes on setting up Autoware on Ubuntu 18.04.

Docker-based installation, no need to install CUDA on the host! Personally tested and working!

As for installing Docker itself, I recommend docker_practice.pdf: concise descriptions, install what you need.

1. Install Ubuntu 18.04 and change the apt mirrors.

2. Install an input method and small utilities on Ubuntu; update the NVIDIA graphics driver.

3. Install Docker and change the Docker registry mirror.

4. Change the Docker user permissions.
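Step 4 usually amounts to adding your user to the docker group so that docker commands work without sudo. A minimal sketch (run on your own machine; the change takes effect after you log out and back in):

```shell
# Let the current user run docker without sudo
sudo groupadd -f docker          # create the group if it does not exist yet
sudo usermod -aG docker "$USER"  # add the current user to it
# Check group membership afterwards (after re-login):
id -nG "$USER"
```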

5. Install NVIDIA's Docker runtime. See https://github.com/NVIDIA/nvidia-docker/wiki/Installation-(version-2.0)

5.1 First configure the package repository. See https://nvidia.github.io/nvidia-docker/

It supports Ubuntu 18.04, but the install commands listed there have no Ubuntu entry, so you need to use the Debian one.

Command 1:

curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
  sudo apt-key add -

Command 2:

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)

Command 3:

curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list

Command 4:

sudo apt-get update

5.2 Start the installation

sudo apt-get install nvidia-docker2
sudo pkill -SIGHUP dockerd

During the process you will be prompted about the configuration file '/etc/docker/daemon.json':
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   What would you like to do about it? Your options are:
    Y or I : install the package maintainer's version
    N or O : keep your currently-installed version
      D    : show the differences between the versions
      Z    : start a shell to examine the situation

This file currently holds your Docker registry mirror settings, so it is best to save a copy and add them back in afterwards.
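After accepting the maintainer's version, you can merge your mirror settings back in. A hypothetical /etc/docker/daemon.json combining the nvidia runtime entry (as written by the nvidia-docker2 package) with a registry mirror (the mirror URL is just an example) might look like:

```
{
    "registry-mirrors": ["https://registry.docker-cn.com"],
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```

After editing, reload it with `sudo pkill -SIGHUP dockerd`, as above.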

5.3 Installation finished; verify it

sudo docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi

The result was an error:

docker: Error response from daemon: Unknown runtime specified nvidia.
See 'docker run --help'.


After some searching: the nvidia runtime had not been registered. But I had already run sudo pkill -SIGHUP dockerd (this command reloads daemon.json). On checking, I had changed the Docker registry mirror after running that command, so I ran it once more and then re-ran the command from 5.3; this time it worked.
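A quick way to confirm the registration took effect (requires a running Docker daemon; the exact output format may vary by version):

```shell
# List the runtimes the daemon knows about; "nvidia" should appear
docker info 2>/dev/null | grep -i runtimes
```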

5.4 Output of a successful nvidia-docker2 run

Digest: sha256:4e5be8905f77e239e561c55246d9d90301920cf9232a000860a42f71ee862450
Status: Downloaded newer image for nvidia/cuda:latest
Mon Jul 29 14:28:44 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.26       Driver Version: 430.26       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN Xp            Off  | 00000000:01:00.0  On |                  N/A |
| 23%   37C    P8    10W / 250W |    253MiB / 12193MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

 

6. Download and run the Autoware image

Search Docker Hub for autoware; you will find many versions. Look through the Tags for the one you need; here I used 1.12.0-melodic-cuda. 1.12.0 is the major version, melodic is the matching ROS distro, and cuda means CUDA is supported. There is also a 1.12.0-melodic-base-cuda version; base presumably means not yet built. Here I ran:

docker pull autoware/autoware:1.12.0-melodic-cuda

docker pull autoware/autoware:1.12.0-melodic-base-cuda

I pulled both; my home connection is fast.
PS: if you want to change where Docker stores its images, do that before pulling.
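Changing the storage location is itself a daemon.json setting. A sketch (the path is just an example) that must be in place before any pulls:

```
{
    "data-root": "/mnt/bigdisk/docker"
}
```

Restart the daemon afterwards, e.g. with sudo systemctl restart docker.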

https://gitlab.com/autowarefoundation/autoware.ai/autoware/wikis/Generic-x86-Docker

The above is the official tutorial page; recommended reading, straight from the source and error-free.

Next, start the image. Simply run:

$ git clone https://gitlab.com/autowarefoundation/autoware.ai/docker.git

$ cd docker/generic

Once inside generic, update run.sh to the current version. The script is long, but you only need to change the image version.
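The edit amounts to pointing the script at the tag you pulled. Depending on the script version, you may also be able to pass it on the command line instead of editing the file (check ./run.sh --help for the actual option names; the value below is the prefix of the tag pulled earlier):

```shell
# Alternative to editing run.sh: pass the desired tag prefix directly
./run.sh --tag-prefix 1.12.0
```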

After editing, run:

./run.sh

and the Autoware image is up and running. To switch versions you only need to change run.sh again, which is far more convenient than building from source.

The official site also documents how to build the -base images, which I won't repeat here. Once again, the official page:

https://gitlab.com/autowarefoundation/autoware.ai/autoware/wikis/Generic-x86-Docker

6.1 How data used inside Autoware is exchanged with the host

From 1.12.0 onward, file exchange between container and host works differently. Both the container and the host have a shared_dir folder where common data can be stored; I am still unsure about its storage limits, so a large enough disk is fine.
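In practice shared_dir appears to be a folder bind-mounted by run.sh, so its capacity is simply whatever the host disk holds. A small sketch of the host side (the container-side mount point is assumed to be the matching shared_dir under the autoware user's home; check your run.sh):

```shell
# Put a file into the host-side shared folder; it then shows up inside
# the container under the corresponding shared_dir mount point
mkdir -p "$HOME/shared_dir"
echo "hello from host" > "$HOME/shared_dir/note.txt"
cat "$HOME/shared_dir/note.txt"
```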

6.2 How to install the -base image

$ git clone https://gitlab.com/autowarefoundation/autoware.ai/docker.git
$ cd docker/generic

The only difference is that you edit build.sh instead, then run

./build.sh

Then run

$ ./run.sh -t local

to enter the image you built.


7. Autoware's Docker requirements.

# Autoware Docker

Docker can be used to allow developers to quickly get a development environment ready to try and develop Autoware.

There are two sets of Docker images for Autoware:

* **Base image** - Provides a development container with all the dependencies to build and run Autoware. When starting a container using this image, the Autoware source code is mounted as a volume allowing users to develop and build Autoware. Base images have the label *-base* in their names.

* **Pre-built Autoware** - Provides a container with a copy of Autoware pre-built. This image is built on top of the base image.

Each set of Docker images comes with and without Cuda support. Images with Cuda support have the label *-cuda* in their names.

This set of Dockerfiles can be used to build and run containers natively on both AArch64 and x86_64 systems.

## Requirements

* Recent version of [Docker CE](https://docs.docker.com/install/linux/docker-ce/ubuntu/)
* [NVIDIA Docker v2](https://github.com/NVIDIA/nvidia-docker) if your system has Cuda support

## How to build

To build the docker image(s), use the **build.sh** script. For details on the parameters available, try:

```
$ ./build.sh --help
```

## How to run

To start a container use the **run.sh** tool. Which container will start depends on the parameters passed to **run.sh**.

### Examples of usage:

```
$ ./run.sh
```

Will start a container with pre-built Autoware and CUDA support enabled. This image is useful for people trying out Autoware without having to install any dependencies or build the project themselves. Default image: _autoware/autoware:latest-kinetic-cuda_

```
$ ./run.sh --base
```

Will start a container with the base image (without pre-built Autoware). The container will have Cuda enabled, and the Autoware code base you are running **run.sh** from will be mounted as a volume on the container under _/home/autoware/Autoware_. This is the suggested image for developers using Docker as their development environment. Default docker image: _autoware/autoware:latest-kinetic-base-cuda_

```
$ ./run.sh --base --cuda off
```

Same as the previous example, but Cuda support is disabled. This is useful if you are running on a machine without Cuda support. Note that packages that require Cuda will not be built or will execute on CPU. Default image: _autoware/autoware:latest-kinetic-base_

```
./run.sh --tag-prefix local --base
```

Will start a container with the tag prefix _local_. Note that _local_ is the default tag prefix when using the **build.sh** tool. Image name: _autoware/autoware:local-kinetic-base-cuda_

For details on all parameters available and their default values, try:

```
$ ./run.sh --help
```

## Notes

* The default values for the **--image** and **--tag-prefix** parameters in build.sh and run.sh are different. This is because run.sh defaults to values used to retrieve images from Docker Hub. When running containers from images you have built, make sure the parameters mentioned match.

* Containers started with the **run.sh** tool are automatically removed upon exiting the container. Make sure you use the _shared_dir_ to save any data you want to preserve.

8. To be continued, either in a new article or as additions here.
