AWS ECS

https://www.linkedin.com/pulse/tutorial-deploying-your-first-docker-container-aws-ec2-wootton (Good)

Basic

Why do we need container orchestration?

  • Firstly, we need to cluster our containers for scalability.

  • Secondly, we need to cluster containers for robustness and resilience. When a host or container fails, we want the container to be re-created, perhaps on another healthy host, so the system is not impacted.

  • Finally, tools in the orchestration layer provide an important function of abstracting developers away from underlying machines. In a containerised world, we shouldn't need to care about individual hosts, only that our desired numbers of containers are up and running ‘somewhere appropriate’. Orchestration and clustering tools do this for us, allowing us to simply deploy the container to the cluster, and let the supporting software work out the optimal scheduling of containers onto hosts.

Designing robust and performant distributed clustering systems is notoriously difficult, so tools such as Docker Swarm, Mesos and Kubernetes give us that capability without needing to build it ourselves. ECS takes this one step further and takes away the need to set up, run and administer the orchestration layer. For this reason, ECS is definitely something people developing applications using containers should be looking at closely.

ECS Container Agent

The EC2 servers in your cluster run an ECS agent, a simple process that connects from the host to the centralised ECS service. The ECS agent is responsible for registering the host with the ECS service and for handling incoming requests for container deployments or lifecycle events, such as requests to start or stop containers.

When creating new servers, we can either install and configure the ECS agent manually, or use a pre-built AMI which already has it configured.

Werner Vogels' blog explains that the centralised service has a logical separation between the 'directory' of container state (the cluster manager) and the scheduling engine which controls the deployment of containers onto hosts. The motivation behind this was to make the scheduling of containers pluggable, so we could eventually use other schedulers such as Mesos, or even custom-developed schedulers, for finer-grained control over which containers run where.

ECS Task

Task definitions are split into four basic parts: the task family, the IAM task role, container definitions, and volumes. The family is the name of the task, and each family can have multiple revisions. The IAM task role specifies the permissions that containers in the task should have. Container definitions specify which image to use, how much CPU and memory the container is allocated, and many more options. Volumes allow you to share data between containers and even persist the data on the container instance when the containers are no longer running. The family and container definitions are required in a task definition, while volumes are optional.
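As a rough sketch of how those parts map onto the API, a task definition could be registered with the AWS CLI as below. The family name, role ARN and container values are placeholder assumptions, not part of the original tutorial:

# Register a task definition: family + (optional) task role + container definitions
aws ecs register-task-definition \
  --family hello-world \
  --task-role-arn arn:aws:iam::123456789012:role/demo-task-role \
  --container-definitions '[
    {
      "name": "wordpress",
      "image": "wordpress",
      "cpu": 100,
      "memory": 500,
      "essential": true,
      "portMappings": [{ "containerPort": 80, "hostPort": 80 }]
    }
  ]'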

  • Cluster: a logical grouping of container instances that you can place tasks on.
  • Container Instance: an Amazon EC2 instance that is running the Amazon ECS agent and has been registered into a cluster.
  • Task Definition: a description of an application that contains one or more container definitions.
  • Task: an instantiation of a task definition that is running on a container instance.

ECS Service

Services are how ECS provides resilience. When started, the service monitors that the underlying task is alive and running with the correct number of instances and the correct number of underlying containers. If a task stops or becomes unresponsive or unhealthy, the service will request that replacement tasks are started in its place, cleaning up as necessary.
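A minimal sketch of creating such a service with the AWS CLI, assuming a cluster and task definition with the placeholder names used in the sketches in this article:

# Create a service that keeps 3 copies of the task running in the cluster
aws ecs create-service \
  --cluster demo-cluster \
  --service-name hello-world-svc \
  --task-definition hello-world \
  --desired-count 3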

Life cycle of using ECS

  • Define task
  • Define service
  • Create ECS cluster
  • Create the Stack
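The first steps roughly map onto AWS CLI calls like the sketch below (cluster name is a placeholder assumption); "Create the Stack" presumably refers to provisioning the underlying container instances, for example via the console or CloudFormation, which is not shown here:

# Sketch of the cluster-creation step
aws ecs create-cluster --cluster-name demo-cluster

# After container instances have joined, verify they are registered
aws ecs list-container-instances --cluster demo-cluster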

ECS with ELB and AutoScaling

In the example above, we connected directly to one of the three containers. This is not very robust, as the container could die and be re-spawned on a different server, meaning the container-specific IP address becomes invalid.

Instead, we can register our services dynamically with Elastic Load Balancers (ELBs). As the underlying tasks start, stop and move around the pool of EC2 instances, the service keeps the ELB up to date so traffic is routed accordingly.
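With the AWS CLI, this wiring happens when the service is created, by pointing it at a load balancer target group. A sketch, reusing the placeholder names from earlier; the target group ARN below is a made-up example:

# Create a service whose tasks are registered behind an ALB target group
aws ecs create-service \
  --cluster demo-cluster \
  --service-name hello-world-web \
  --task-definition hello-world \
  --desired-count 3 \
  --load-balancers targetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/demo/0123456789abcdef,containerName=wordpress,containerPort=80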

ECS integrates well with EC2 Auto Scaling, which is currently the preferred way of growing the cluster as it comes under load.

Autoscaling works by monitoring metrics such as CPU, memory and IO, and adding nodes into or removing nodes from the pool as certain conditions are breached.
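A heavily simplified sketch of that pattern with the AWS CLI, assuming the container instances run in an Auto Scaling group called demo-ecs-asg; all names and thresholds here are placeholder assumptions:

# Simple scaling policy: add one container instance when triggered
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name demo-ecs-asg \
  --policy-name ecs-scale-out \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1

# CloudWatch alarm on the cluster's CPU reservation that fires the policy
# (replace <policy-arn> with the ARN returned by the previous command)
aws cloudwatch put-metric-alarm \
  --alarm-name demo-cluster-cpu-reservation-high \
  --namespace AWS/ECS \
  --metric-name CPUReservation \
  --dimensions Name=ClusterName,Value=demo-cluster \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 75 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions <policy-arn>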

Newly launched nodes will automatically register with the ECS cluster and will then be eligible for new container deployments. Note that by default, new nodes register into a cluster called default unless you provide scripting to point them at another cluster.
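That scripting is typically a small piece of EC2 user data that writes the cluster name into the agent's configuration before it starts. A sketch, assuming the ECS-optimised AMI and the placeholder cluster name demo-cluster:

#!/bin/bash
# EC2 user data: tell the ECS agent which cluster to register into
echo ECS_CLUSTER=demo-cluster >> /etc/ecs/ecs.config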

Other concepts

  • container links
  • AWS CLI
  • Logging and troubleshooting

ECS CLI

Tutorial

  • create cluster

ecs-cli up --keypair id_rsa --capability-iam --size 2 --instance-type t2.medium

  • create docker compose file

version: '2'
services:
  wordpress:
    image: wordpress
    cpu_shares: 100
    mem_limit: 524288000
    ports:
      - "80:80"
    links:
      - mysql
  mysql:
    image: mysql
    cpu_shares: 100
    mem_limit: 524288000
    environment:
      MYSQL_ROOT_PASSWORD: password

  • deploy compose file to a cluster (this is like defining a task)

ecs-cli compose --file hello-world.yml --project-name project_name up
ecs-cli compose --file hello-world.yml down

  • view running container on a cluster

ecs-cli ps

  • Scale the tasks on a cluster

ecs-cli compose --file hello-world.yml scale 2

  • Create ECS service from a compose file

ecs-cli compose --file hello-world.yml service up

  • Clean up

ecs-cli compose --file hello-world.yml service rm
ecs-cli down --force
