At this year’s Lonestar Application Security Conference I was asked to speak about Docker. While many claim “Docker Security” is oxymoronic, the research shows a more interesting picture.
Before considering Docker security, however, it’s worth considering whether a Docker adoption even makes sense.
Why Docker
First assumption: Feature velocity is a primary revenue indicator. The faster features get to production, the faster they produce money.
Second assumption: Innovation leads to higher-quality features at a lower total cost to produce.
If a retail chain provides a better hammer-buying experience online, those same customers will buy their lumber at its physical stores. Software wins matter for every company.
Micro-services Mean Micro-teams
Andy Pemberton of CloudBees summed this up nicely at the Continuous Delivery Summit: “Micro-services are not a new concept. It’s just really hard.” Why? The interface between services is an optional, code-level abstraction. Therefore, it goes undone in the face of outside pressure.
Docker containers provide an infrastructure-defined boundary between services, and Docker provides slick tools to do the packaging. This makes it very easy to reason about what a “container” is doing as shorthand for what a service is actually doing. Micro-services for mere mortals.
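As a minimal sketch (the image name and port here are hypothetical), the entire inter-team contract can be reduced to an image and the ports it publishes:
# package the service; the image itself is the service boundary
docker build -t example/catalog:1.0 .
# run it; the published port is the only contract other teams see
docker run -d --name catalog -p 8080:8080 example/catalog:1.0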
If you have 15 services, and break your team into groups of 3 for each service, things get interesting:
- Feature velocity increases drastically. Each service moves at its own pace.
- Communication overhead drops drastically. Communication between teams is rarely needed, because:
- Small teams of 3 to 5 engineers are self-organizing.
- Inter-team communication occurs only when breaking changes to external contracts exist.
Process Density
In 2007, the EPA put data center power usage at 1.5% of US energy production. In 2011, that number was estimated as high as 2.2%.
In 2012, typical host utilization was about 10%. Of course, there are the machines, switches, racks, etc. as well.
Even a small startup can benefit: one real-world Docker adoption reduced AWS spend by over 50%.
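How is that kind of saving achieved? A hedged sketch, assuming hypothetical image names: instead of one mostly idle VM per service, several resource-capped containers share a single host.
# cap memory and CPU shares so many services can safely share one host
docker run -d --memory 256m --cpu-shares 512 example/api:1.0
docker run -d --memory 256m --cpu-shares 512 example/worker:1.0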
And Then …
By now we can see potential increases in revenue and decreases in costs. Docker clearly matters. If you would like more of this sort of thinking and context, please read this interview done for The Linux Foundation.
Docker Security Concerns
Here are a few of the many ways to approach security concerns with something as significant as Docker:
Identity Management
The Docker daemon runs as root. When you run any Docker container, you can very easily escalate your privileges.
docker run --privileged -v /root:/root:rw --entrypoint rm stackhub/haproxy -rf /root
Even though the intent of the stackhub/haproxy image is to run a load balancer, this simple exploit lets the user wipe root’s home directory on the host. Any bad actor with the ability to execute docker run can do this.
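The problem is not specific to that image, either. A common demonstration (here using the public alpine image) shows that anyone who can talk to the Docker daemon is effectively root on the host:
# mount the host's filesystem and act as root inside it
docker run --rm -v /:/host alpine chroot /host whoami
# prints "root"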
Orchestration tools such as StackEngine address this problem with access control lists enforced across the user interfaces (API, CLI, and GUI). In such products, users are only able to launch containers “as is” from their images, which addresses this key security concern.
Image Content Verification
Until Docker 1.8, any docker pull against a public registry was a crapshoot: nothing was signed. With the advent of Docker 1.8, the Docker Content Trust feature was rolled out.
Images from a remote registry can now be signed and verified. This is a critical step in the “Docker replaces your package manager” story. However, be sure to RTFM:
- Image publishers have final discretion on whether or not to sign an image.
- Content trust is disabled by default; the Docker client enforces it only when the DOCKER_CONTENT_TRUST environment variable is set.
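Turning enforcement on is a quick, client-side sketch:
# opt in to content trust for this shell session
export DOCKER_CONTENT_TRUST=1
# this pull now fails unless the tag has been signed by its publisher
docker pull alpine:latest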
Docker Daemon vs. the Hypervisor
It has become an ingredient in the Docker Kool-Aid to say that running containers in a VM provides better security due to the presence of a hypervisor. This bit of the religious canon implies the Docker daemon is somehow less secure than the hypervisor.
It’s an idea worth exploring. So let’s take a look at Docker, Xen and KVM.
Venom
The Venom exploit, a bug in the virtual floppy disk controller that QEMU provides to Xen and KVM guests, demonstrates that hypervisors are not perfect. This led me to ask, “How complex are these things?” So I went to www.openhub.net and built a simple complexity model.
Bald Reality – Battle Hardening
Battle hardening is a key consideration for any infrastructure tooling such as a hypervisor, web server or data store. Simply put, Docker is not in the same ballpark as its rivals in this area. Choosing Docker is tantamount to agreeing to be part of the hardening process.
The inception dates:
- Docker 2013
- Xen 2003
- KVM 2005
If you can find the right place to try Docker, “Welcome to the party, drinks are on the left.”
Jez Humble, also at the CD Summit in Austin, introduced us to the strangler pattern: rather than making a wholesale change, pull out one bit of functionality as a service. Dockerize it. Learn from it. Move on to the next.
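A minimal sketch of one strangler step, with hypothetical names and ports: carve a single capability out of the monolith, containerize it, and route only that traffic to the container.
# run the extracted reports service alongside the legacy app
docker run -d --name reports -p 8081:8080 example/reports:1.0
# the front-end proxy now routes /reports to port 8081; everything
# else still hits the monolith until the next piece is carved out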
Lines of Code as Complexity and Surface Area
Each line of code represents risk. Consider the number of lines in each code base:
- Docker ~= 300k
- Xen ~= 500k
- KVM ~= 13,500k
Code Churn
Around July 2015, the Docker code base went from roughly 1% C to 38% C in the span of a couple of months. That is an enormous block of new code in a new language!
Looking at Xen, most of its huge additions happened back in 2005 to 2007. This is a battle-hardened system … or is it? Xen’s growth chart has a decidedly saw-tooth shape with a period of about two years: strong visual evidence that there are recurring stretches where a large percentage of the code is new.
It is interesting that the KVM code base grows from ~4m lines in 2005 to the 13.5m lines we see in 2015. It is also shockingly linear growth. How much bit-rot is in there?
Comparing lifetime trends is not fair to any of these technologies. Here is the average monthly activity in each code base over the last year:
- Docker – 627 commits per month
- Xen – 204 commits per month
- KVM – 5894 commits per month
Number of Hands in the Code
The larger the number of committers, the more mature a project’s processes must be to support them. KVM has had many times more contributors involved over the last year:
- Docker – 634 Contributors
- Xen – 116 Contributors
- KVM – 3580 Contributors
Conclusions about Comparisons to Hypervisors
Docker is a new project and should be viewed as high risk. However, its rate of churn and its complexity are an order of magnitude lower than those of the venerable KVM.
Docker is worth considering in low risk production situations.
Docker does not need a hypervisor as a “baby sitter.”
Get Relevant – Black Box Testing
Ops is in the way. It is a crushing bottleneck that is strangling the business. Ops is a cost center. Ops provides no business value. Ops should be outsourced.
Ops was reviled until there was DevOps. And slowly, our image is being rehabilitated.
In the above two paragraphs, substitute Security for Ops. How can this happen? How can Security become part of the solution instead of a perceived low-value impediment to business agility?
DevOps.
Consider the evolving role of Ops and Security engineers. Today, our job is to say, “No you cannot do it like that.” We are traffic cops. We are in the way. We force teams with delivery pressure to work around us. In the end, nobody wins, least of all those whose data is breached. Worse, we Ops and Sec folks don’t want to be in this situation. We want to make meaningful contributions to progress.
As with micro-services, Docker offers an easier way to reason about cutting-edge ideas. Rather than producing lengthy documents about approved content for infrastructure, let’s write tests. Automated tests.
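For instance, here is a hedged sketch of one such automated test (the image name is hypothetical): fail the pipeline whenever an image is built to run as root.
# inspect the image's configured user; empty means root
user=$(docker inspect --format '{{.Config.User}}' example/api:candidate)
if [ -z "$user" ] || [ "$user" = "root" ]; then
  echo "FAIL: image runs as root"
  exit 1
fi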
The Human Factor
Me: “You can’t use that library, it’s not vetted.” Dev: “Dude, this saved me four weeks of time.” Me: “It’s my job if I let that go and we are breached because of it.” Dev: “It’s my job if we don’t make this deadline.”
This situation represents all that is wrong with a siloed approach to software development. Why am I not collaborating with that developer?
What if that developer had pushed his code a month ago, seen a test fail saying a SQL injection attack was possible, and learned it was the library he was using?
First, a conflict between humans and departments would not have occurred. The fact that the code is insecure is indisputable; that is a problem that must be solved. Engineers are good at solving problems. Engineers are often not great at dealing with conflict.
Second, in classic Lean thinking, quality (in the form of security) has been moved to the very beginning of the process. The feedback loops to the developers are much shorter, and risk is reduced drastically.
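As a sketch of what that earlier feedback loop could look like (the service name, port and endpoint are hypothetical): start the candidate container in the pipeline and probe it with a classic injection payload.
# stand up the candidate build
docker run -d --name sut -p 8080:8080 example/api:candidate
sleep 5   # crude wait for the service to come up
# a leaked SQL error on a classic payload fails the build
if curl -s "http://localhost:8080/users?id=1%27%20OR%20%271%27=%271" | grep -qi "sql"; then
  echo "FAIL: possible SQL injection"
  docker rm -f sut
  exit 1
fi
docker rm -f sut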
Breaches Happen
By moving quality and security as far forward in the SDLC as possible, two key behaviors become possible.
When a breach happens, work stops. A team of crack ops and dev engineers figures out how the breach occurred and writes tests to show it cannot happen again. The application is remediated and the patch released. In a Docker, micro-service world this can happen very rapidly. Sound like test-driven development (TDD)? It should. Security and quality engineers adopting best practices from developers? DevOps.
Some team gets the job of attacking applications. They get to write code to find exploits. They get to wear their black hats to work. One of the micro-teams can be tasked with exploiting the services of two other teams; in this way they become familiar with those services (cross-training).
Critical to this second behavior is a celebration of both teams, both when a flaw is found and when it is fixed. Does this sound like the Netflix Chaos Monkey?
Conclusion
Docker is not as risky as is generally thought. Many of its current flaws serve only to draw attention to the practices of most IT shops today.
Running Docker within a hypervisor is not a panacea; it adds a vast layer of complexity. Docker can, and should, run on bare metal when that makes sense.
Docker should be adopted for low-risk production workloads via the strangler pattern, so that Ops and Security teams can leverage it to the benefit of their businesses.