Docker hands-on command notes ------ For Mac

[For Linux: add 'sudo' before all the commands]

1.image:

a lightweight, standalone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.

2. container:

- a runtime instance of an image

- what the image becomes in memory when actually executed

- runs completely isolated from the host environment by default

- can only access host files and ports if configured to do so

3. begin building an app the Docker way:

Stack:

at the top level, define the interactions of all the services

Service:

defines how containers behave in production

Container:

at the bottom of the hierarchy of such an app; containers are executed images

3.1 Dockerfile:

Define a container with a Dockerfile :

# Use an official Python runtime as a base image
FROM python:2.7-slim

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
ADD . /app

# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]
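The app.py referenced by this Dockerfile is not included in these notes. A minimal, stdlib-only sketch of such an app (serving a greeting built from the NAME environment variable, so that `ENV NAME World` above changes the output) might look like this — `greeting`, `Handler`, and `serve` are hypothetical names:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def greeting(name):
    """Build the HTML body returned on every request."""
    return "<h3>Hello {}!</h3>".format(name)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # NAME is read at request time, so the Dockerfile's ENV NAME
        # (or docker run -e NAME=...) controls what is served.
        body = greeting(os.environ.get("NAME", "world")).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port=80):
    # Inside the container this binds the EXPOSEd port 80.
    HTTPServer(("", port), Handler).serve_forever()
```

With serve() running in the container and the port mapped via -p 4000:80, the page would be reachable at http://localhost:4000.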

3.2 Build the app

- ls

Dockerfile     app.py    requirements.txt

docker build -t friendlyhello .

* the build command

* this creates a Docker image called "friendlyhello" from the current directory

* the "-t" flag tags the created image with a name

docker images

Where is your built image? In your machine's local Docker image registry.

3.3 Run the app

- run the app, map your machine's port 4000 to the container's exposed port 80 using "-p" :

docker run -p 4000:80 friendlyhello

- http://0.0.0.0:80 : the address reported from inside the container

- http://localhost:4000 : the correct URL to visit from the host

- run in detached mode :

docker run -d -p 4000:80 friendlyhello

- see the abbreviated container ID with

docker ps

- to end the process, use the CONTAINER ID :

docker stop 1fa4ab2cf395
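As an aside, the HOST:CONTAINER convention used by "-p" can be made explicit with a tiny illustrative helper (parse_port_mapping is a hypothetical name, not a Docker API):

```python
def parse_port_mapping(spec):
    """Split a docker run -p argument such as "4000:80"
    into (host_port, container_port)."""
    host, container = spec.split(":")
    return int(host), int(container)
```

So "-p 4000:80" forwards host port 4000 to container port 80, which is why the app answers on http://localhost:4000.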

3.4 Share your image:

- Objective :

* upload our build

* run it somewhere else

* learn how to push to registries to make deployment of containers actually happen

- Registry :

* a collection of repositories

- Repository :

* a collection of images

* like a GitHub repository

* except the code is already built

- An account on a registry : cloud.docker.com

- STEPS :

* docker login

* docker tag friendlyhello username/repository:tag

* docker push username/repository:tag

* docker run -p 4000:80 username/repository:tag

4. Services

- learn how to scale your application by running this container in a service & enable load-balancing

- Prerequisites :

* docker run -p 80:80 username/repo:tag

* ensure your image is working by running this and visiting http://localhost

- Service :

In a distributed application, the different pieces of the app are called "services".

For example, a video sharing site:

* a service for storing application data in a database

* a service for video transcoding in the background

* a service for the front-end

"Services : containers in production"

A service :

* only runs an image

* codifies the way that image runs :

-> what ports it should use

-> how many replicas of the container should run so the service has the capability it needs

* Scaling a service changes the number of container instances running that piece of software

4.1 Define, run, and scale services on the Docker platform with the file :

docker-compose.yml

a YAML file that defines how Docker containers should behave in production

docker-compose.yml

version: "3"
services:
  web:
    image: username/repository:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:

* replicas: 5

Run 5 instances of the image we uploaded, as a service called web

* condition: on-failure

immediately restart containers if one fails

* "80:80"

the 1st 80 means the port on the host

* the 1st "networks: - webnet "

instruct web's containers to share port 80 via a load-balanced network called webnet

* the 2nd "networks: webnet"

define the webnet network with the default settings (which is a load-balanced overlay network)
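On the units used above: a Compose memory limit such as "50M" is a byte count with a binary suffix. A hypothetical helper (not part of Docker or Compose) makes the conversion explicit:

```python
def parse_memory(spec):
    """Convert a Compose memory string like "50M" into bytes."""
    units = {"b": 1, "k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    spec = spec.strip().lower()
    if spec[-1] in units:
        return int(float(spec[:-1]) * units[spec[-1]])
    return int(spec)  # a bare number already means bytes
```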

4.2 Run your new load-balanced app

STEP 01

docker swarm init

STEP 02

docker stack deploy -c docker-compose.yml getstarted

(this gives your app the name getstarted)

STEP 03

docker stack ps getstarted

(see a list of five containers you just launched)

STEP 04

curl http://localhost/

4.3 Scale the app

scale the app by changing the "replicas" value in docker-compose.yml, saving the change, and re-running the "docker stack deploy" command :

docker stack deploy -c docker-compose.yml getstarted

4.4 Take down the app

docker stack rm getstarted

5. Swarm

* In part Containers:

- take an app you wrote

* In part Services:

- define how it should run in production by turning it into a service

- scaling it up 5x in the process

* In part Swarm:

- deploy this application onto a cluster

- running it on multiple machines

* Multi-container, multi-machine applications are made possible by joining multiple machines into a "Dockerized" cluster called a

swarm

* A swarm:

a group of machines that are running Docker and have been joined into a cluster

* after joining a swarm, you keep running the same Docker commands as before, but now they are executed on the cluster by a

swarm manager

* Swarm manager:

uses strategies to decide where to run containers:

- "emptiest node" : which fills the least-utilized machines with containers

- "global" : which ensures that each machine gets exactly one instance of the specified container

instruct the swarm manager to use these strategies in the Compose file

* the swarm manager is the only machine in a swarm that can:

- execute your commands

- authorise other machines to join the swarm as workers

* Worker:

- just provide capacity

- do not have the authority to tell any other machine what it can or cannot do

* Swarm mode:

- Docker work mode:

* a single-host mode on your local machine

* swarm mode

- swarm mode:

* Enabling swarm mode instantly makes the current machine a swarm manager

* Docker executes commands on the swarm, rather than on the current machine.

5.1 Set up your swarm:

- A swarm is made up of multiple nodes, which can be either physical or virtual machines.

- basic concepts:

docker swarm init

to enable swarm mode and make your current machine a swarm manager

docker swarm join

(on other machines)

to have them join the swarm as a worker

5.2 Create a cluster

- create a couple of VMs using the VirtualBox driver :

docker-machine

docker-machine create --driver virtualbox myvm1

docker-machine create --driver virtualbox myvm2

(myvm1 - manager; myvm2 - worker)

- send commands to your VMs using

docker-machine ssh

docker-machine ssh myvm1 "docker swarm init"

("docker swarm init" - instruct myvm1 to become a swarm manager)

- to have myvm2 join your new swarm as a worker:

docker-machine ssh myvm2 "docker swarm join --token <token> <ip>:<port>"

(fill in the token and IP:port printed by "docker swarm init" on myvm1)

5.3 Deploy your app on a cluster

- deploy your app on your new swarm

- only swarm managers like myvm1 can execute Docker commands

- workers are just for capacity

- copy the docker-compose.yml file to the swarm manager myvm1's home directory (alias: ~)

docker-machine scp docker-compose.yml myvm1:~

- Now have myvm1 use its powers as a swarm manager to deploy your app :

docker-machine ssh myvm1 "docker stack deploy -c docker-compose.yml getstartedlab"

the app is deployed on a cluster

- you'll see the containers have been distributed between both myvm1 and myvm2

docker-machine ssh myvm1 "docker stack ps getstartedlab"

5.4 Accessing your cluster

- To get your VMs' IP addresses

docker-machine ls

- visit either of them in a browser

6. Stack

Swarm chapter:

- learn how to set up a swarm

* a swarm: a cluster of machines running Docker

- and deploy an app on a swarm, with containers running in concert on multiple machines

Stack chapter:

- A stack:

* the top of the hierarchy of a distributed app

* a group of integrated services that share dependencies, and can be orchestrated and scaled together

- A single stack is capable of defining and coordinating the functionality of an entire application

- In part Swarm: a single stack of a single service ran on a single host

- In part Stack: a stack of multiple services related to each other, run on multiple machines

Summary of hands-on operations

1. /images/php7_fpm_base:  {php7_fpm_base}

* docker build -t bonjourausy_php7_fpm_base .

* docker tag bonjourausy_php7_fpm_base $DOCKER_ID_USER/bonjourausy_php7_fpm_base

* docker push $DOCKER_ID_USER/bonjourausy_php7_fpm_base

* docker images

- php:7-fpm

- bonjourausy_php7_fpm_base:latest

- amelieykw/bonjourausy_php7_fpm_base:latest

2. /images/app:  {app}

* docker build -t bonjourausy_app .

* docker tag bonjourausy_app $DOCKER_ID_USER/bonjourausy_app

* docker push $DOCKER_ID_USER/bonjourausy_app

* docker images

- bonjourausy_app:latest

- amelieykw/bonjourausy_app:latest

3. /images/nginx:   {webnginx}

* docker build -t bonjourausy_webnginx .

* docker tag bonjourausy_webnginx $DOCKER_ID_USER/bonjourausy_webnginx

* docker push $DOCKER_ID_USER/bonjourausy_webnginx

* docker images

- nginx:latest

- bonjourausy_webnginx:latest

- amelieykw/bonjourausy_webnginx:latest

4. /images/mysql:   {mysql}

* docker build -t bonjourausy_mysql .

* docker tag bonjourausy_mysql $DOCKER_ID_USER/bonjourausy_mysql

* docker push $DOCKER_ID_USER/bonjourausy_mysql

* docker images

- mysql:latest

- bonjourausy_mysql:latest

- amelieykw/bonjourausy_mysql:latest

Docker Compose VS Docker Cloud Stack File

1. Docker Compose

* Docker Compose runs on localhost or a virtual machine

* Prerequisites :

- Docker Engine and Docker Compose already installed

* STEP 01: Set Up

- mkdir composetest

- cd composetest

- create a file of app code (like app.py) in this directory

* STEP 02: Create a Dockerfile

- Dockerfile: to build an image

- create a Dockerfile in the directory of the image

- Dockerfile: declares all the dependencies that this image needs

# Build an image starting with the Python 3.4 image
FROM python:3.4-alpine

# Add the current directory into the path /code in the image
ADD . /code

# Set the working directory to /code
WORKDIR /code

# Install the Python dependencies
RUN pip install -r requirements.txt

# Set the default command for the container to "python app.py"
CMD ["python", "app.py"]

requirements.txt

flask

redis
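The quickstart's app.py is not reproduced in these notes; it is a small Flask app that increments a hit counter in Redis on each request. The core logic can be sketched with an in-memory stand-in for the Redis client (FakeRedis and hit are hypothetical names):

```python
class FakeRedis:
    """In-memory stand-in for a redis.Redis client: just enough
    to model INCR, which creates missing keys at 0 and adds 1."""
    def __init__(self):
        self.store = {}

    def incr(self, key):
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

def hit(cache, key="hits"):
    """What the quickstart's route handler does on each request."""
    count = cache.incr(key)
    return "Hello World! I have been seen {} times.\n".format(count)
```

In the real app the FakeRedis instance would be a redis.Redis client pointing at the redis service defined in docker-compose.yml.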

* STEP 03: Define services in a Compose file

docker-compose.yml

version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  redis:
    image: "redis:alpine"

- build: .

use an image that's built from the Dockerfile in the current directory

- ports: - "5000:5000"

the 1st 5000 is the port on the host machine

the 2nd is the exposed port 5000 on the container

- .:/code

Mounts the project directory on the host to /code inside the container, allowing you to modify the code without having to rebuild the image

* STEP 04: Build and run your app with Compose

docker-compose up -d

- Linux host machine: http://localhost:5000

- Mac docker machine: http://MACHINE_VM_IP:5000

docker-machine ip MACHINE_VM

- list local images:

docker image ls

- inspect local images:

docker inspect <tag or id>

* STEP 05: Update the application

- Because the application code is mounted into the container using a volume, you can make changes to its code and see the changes instantly, without having to rebuild the image.

- STEPs:

* change the app code in app.py

* refresh the browser to see the changes

* STEP 06: Experiment with some other commands

docker-compose up -d

run your services in the background in the "detached" mode

docker-compose ps

to see what is currently running

docker-compose run web env

allows you to run one-off commands for your services

docker-compose stop

to stop your services once you've finished with them

docker-compose down --volumes

to bring everything down, removing the containers entirely, with the down command

--volumes

remove the data volumes used by the Redis container

docker-compose --help

2. Stack file for Docker Cloud

* A stack : a collection of services that make up an application

* A stack file :

- a file in YAML format that defines one or more services

- similar to docker-compose.yml file for Docker Compose

- but a few extensions

- default name:

docker-cloud.yml

* Manage service stacks:

- Stacks :

a convenient way to automatically deploy multiple services that are linked to each other, without needing to define each one separately

- Stack files :

define :

* environment variables

* deployment tags

* the number of containers

* related environment-specific config

docker-cloud.yml

lb:
  image: dockercloud/haproxy
  links:
    - web
  ports:
    - "80:80"
  roles:
    - global
web:
  image: dockercloud/quickstart-python
  links:
    - redis
  target_num_containers: 4
redis:
  image: redis

lb / web / redis :

Each key defined in docker-cloud.yml creates a service with that name in Docker Cloud.

* Create a Stack:

- from the web interface

- using CLI:

docker-cloud stack create -f docker-cloud.yml

* Update an existing stack:

- specify an existing stack when you create a service

- later want to add a service to an existing stack

- from the Docker Cloud web interface

- using CLI:

docker-cloud stack update -f docker-cloud.yml <uuid or name>

Summary of hands-on operations

.../test/ykw_BonjourAUSY

docker-machine ls

docker-machine start BonjourAUSY

docker-machine env BonjourAUSY

eval $(docker-machine env BonjourAUSY)

docker login

docker image ls

docker-compose up -d     # to run docker-compose.yml

docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

- Run a command in a running container

- docker exec only runs a new command in a running container; the command is not restarted if the container is restarted

docker run --name ubuntu_bash --rm -it ubuntu bash

docker exec -d ubuntu_bash touch /tmp/execWords

# create a new file (/tmp/execWords) inside the running container (ubuntu_bash), in the background

docker exec -it ubuntu_bash bash

# execute an interactive bash shell on the container

docker-compose ps

to see the running local services

docker container ps

to see the local containers
