When it comes to enterprise application development, security is still an afterthought, addressed only just before a release is deployed. The rapid adoption of software containers presents a rare opportunity for security to move upstream (or in devops-speak, to “shift left”) and become integrated early on and throughout the software delivery pipeline. However, most security teams don’t know what containers are, let alone what their unique security challenges might be.
Software containers can be thought of as lightweight virtual machines with much leaner system requirements. Containers share the host OS kernel during runtime, making them exceptionally light (only megabytes in size) -- and fast. Containers take mere seconds to start, as opposed to a few minutes for spinning up a VM.
Containers have been around since the early 2000s and were architected into Linux in 2007. Because of their small footprint and portability, the same hardware can support far more containers than VMs, dramatically reducing infrastructure costs and enabling more apps to be deployed faster.
However, due to usability issues, containers did not catch on until Docker came along and made them more accessible and enterprise-ready. Now containers -- and Docker -- are red hot. Earlier this year, JP Morgan Chase and BNY Mellon publicly stated that they are pursuing container-based development strategies, proof that containers have as much to offer traditional enterprises as they do cloud juggernauts like Google, Uber, and Yelp.
As awesome as containers are, they also introduce unique new risks. As is usually the case with new technologies, containers were not inherently architected with security in mind. If containers are not on your radar, now’s the time to get up to speed, because they are probably already deployed somewhere within your organization. Below, I’ve outlined five of the unique cybersecurity issues that come into play with containers.
1. Managing vulnerabilities in container images
Images are the basic building blocks for containers. Developers can easily create their own images, or they can download public images from the Docker Hub and other centralized open source registries, making the use of containers a highly automated, flexible process.
From a security and governance perspective, trusting the container image is a critical concern throughout the software development lifecycle. Ensuring that images are signed and originate from a trusted registry are solid security best practices. Still, keeping to those practices doesn’t resolve the core challenge of vetting and validating the code.
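One way to operationalize the trusted-registry practice is to check every image reference against an approved list before it is pulled or deployed. A minimal Python sketch, with hypothetical registry names standing in for an organization’s real allowlist:

```python
# Sketch: enforce that images come from an approved registry before use.
# The registry names below are hypothetical examples, not from the article.

TRUSTED_REGISTRIES = {"registry.internal.example.com", "quay.io"}

def registry_of(image_ref: str) -> str:
    """Return the registry portion of an image reference.

    Docker treats the first path component as a registry only if it
    contains a '.' or ':' (or is 'localhost'); otherwise the default
    public registry (docker.io) is implied.
    """
    first = image_ref.split("/", 1)[0]
    if "/" in image_ref and ("." in first or ":" in first or first == "localhost"):
        return first
    return "docker.io"

def is_trusted(image_ref: str) -> bool:
    return registry_of(image_ref) in TRUSTED_REGISTRIES

print(is_trusted("registry.internal.example.com/payments/api:1.4"))  # True
print(is_trusted("nginx:latest"))  # False: implicitly docker.io
```

A check like this is only a gate on provenance; signing (e.g., Docker Content Trust) is still needed to verify that what arrives from that registry is what was published.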
In containerized environments, images are constantly added to the organization’s private registry or hub, and containers running the images are spun up and taken down. Even if vulnerability information is listed for an image, it is rarely presented in a manner that dev teams can place in the context of their organization’s security practices and policies. For example, let’s say developers pull an image from a registry with 1,000 vulnerabilities. That number in and of itself has no actionable context. How many of those vulnerabilities matter? Why?
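To illustrate how that context might be added, here is a minimal Python sketch that triages raw scan findings against a simple organizational policy. The findings and the policy are invented for illustration, not real scanner output:

```python
# Sketch: turn a raw vulnerability count into actionable context by
# triaging findings against a simple organizational policy.

findings = [
    {"id": "CVE-2016-0001", "severity": "critical", "fix_available": True},
    {"id": "CVE-2016-0002", "severity": "high",     "fix_available": False},
    {"id": "CVE-2016-0003", "severity": "low",      "fix_available": True},
    {"id": "CVE-2016-0004", "severity": "high",     "fix_available": True},
]

def actionable(findings, severities=("critical", "high")):
    """Findings that both matter (severity) and can be acted on (a fix exists)."""
    return [f for f in findings
            if f["severity"] in severities and f["fix_available"]]

must_fix = actionable(findings)
print(f"{len(must_fix)} of {len(findings)} findings require action")
for f in must_fix:
    print(f["id"], f["severity"])
```

The point is not the filter itself but the policy behind it: “1,000 vulnerabilities” becomes “N critical-or-high issues with fixes available,” which a dev team can actually schedule and close.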
Amplifying the scale of the problem is the relative ease with which images based on open source builds can be generated, and especially the ease with which more “layers” can be incorporated into an image. The more layers that go into an image build to speed up deployment, the greater the risk that a software component, open source components included, will find its way into production without being scanned, validated, or patched.
We have seen cases where containerization initiatives had (rightly) been set back or even shelved because the organization did not have a container vulnerability assessment program in place. A continuous vulnerability assessment and remediation program needs to be an integral part of an organization’s IT risk and governance program.
2. Reducing the container attack surface
Reducing the attack surface is a basic tenet of security. Preventing code with vulnerabilities from entering into the environment is a perfect example of reducing a key attack surface, but containerization has specific structural and operational elements that require special attention. Mainly, the underlying shared kernel architecture of containers requires attention beyond securing the host; it requires maintaining standard configurations and container profiles.
Unlike in virtualized environments, where a hypervisor serves as a point of control, any user or service with access to the kernel root account is able to see and access all containers sharing the Linux kernel. Security teams can rely on proven approaches to harden kernels and hosts, but they have far less mature and repeatable approaches to securing processes specific to a container environment.
Many of these processes are intrinsic to containerization. For instance, the container itself relies on the kernel as well as the Docker daemon for a range of services accessed via system calls. While Docker has made significant improvements in the ability to invoke out-of-the-box Seccomp (secure computing mode) profiles, these profiles disable only 52 system calls by default, out of an available 313 on x64 machines, leaving some 260 system calls still open.
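The mechanics of such a profile can be sketched in a few lines: a Seccomp profile pairs a default action with per-syscall rules, and any system call not matched by a rule falls through to the default. The tiny profile below is illustrative only, not Docker’s actual default profile:

```python
# Sketch: how a Seccomp-style profile decides the fate of a system call.
# This mirrors the general shape of Docker's JSON profiles (a defaultAction
# plus per-syscall rules); the profile content here is a toy example.
import json

profile_json = """
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {"names": ["read", "write", "exit", "futex"], "action": "SCMP_ACT_ALLOW"}
  ]
}
"""

profile = json.loads(profile_json)

def action_for(profile, syscall: str) -> str:
    """Return the action for a syscall: first matching rule wins,
    otherwise fall back to the profile's default action."""
    for rule in profile["syscalls"]:
        if syscall in rule["names"]:
            return rule["action"]
    return profile["defaultAction"]

print(action_for(profile, "write"))   # explicitly allowed
print(action_for(profile, "keyctl"))  # no rule, so the default (errno) applies
```

The security question for each container, then, is not just whether a profile is applied but how much of the syscall surface its rules actually leave open.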
Another example is the way the Docker daemon is exposed: through a Unix socket available to members of the docker access group, or through a TCP port that lets clients and containers speak to the daemon. Both options reduce operational friction, but both effectively grant those users root access, which is likely to have security departments fuming about violations of the least-privilege-access principle.
Resolving this inherent tension between isolation and the need for container communication, operations, and development means taking steps both to control the extent to which containers interact with each other internally, and to limit the number of containers that are accessible to Docker groups through sockets or open ports.
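One concrete starting point is auditing membership in the docker group, since anyone in it is effectively root-equivalent on the host. A minimal Python sketch, using hypothetical group-file content and an assumed approved list:

```python
# Sketch: flag unexpected members of the docker group, whose access to the
# Docker socket is effectively root-equivalent. The group-file content and
# the approved list are hypothetical examples.

etc_group = """\
root:x:0:
docker:x:998:alice,bob,ci-runner
wheel:x:10:alice
"""

APPROVED = {"ci-runner"}  # e.g., only the CI service account is sanctioned

def group_members(group_file: str, group: str):
    """Parse /etc/group-style content and return the member list of a group."""
    for line in group_file.splitlines():
        name, _pw, _gid, members = line.split(":")
        if name == group:
            return [m for m in members.split(",") if m]
    return []

unexpected = [u for u in group_members(etc_group, "docker") if u not in APPROVED]
print("Unapproved docker-group (root-equivalent) users:", unexpected)
```

Run periodically, a check like this gives security a simple, repeatable control over one of the least visible privilege grants in a container host.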
3. Tightening user access control
Until fairly recently, root access to the Docker host was by default an all-or-nothing proposition, generating plenty of anxiety for security professionals. Although constraining access to the container host root account has consumed the most attention -- and driven investment by Docker in new features that systematically remove privileged access -- the broader concern for security is enforcing access controls to privileged accounts and operations for the deployment pipeline. There are clear benefits for the broader organization in creating pragmatic and effective access controls: accountability and operational consistency.
Accountability entails the ability to pinpoint who changed container settings or configurations, downloaded an image, or started a container in production. With generic root access in place, identifying who made changes is practically impossible. Although root access may be the easiest way to give developers the access they need to get the job done, it can also mean they have far too much access. Moreover, an attacker who gains access to the root account will have full access to the container, including its data and programs.
Applying centrally managed constraints on what changes or commands a user can execute based on their role, rather than their ability to access the root account, enables organizations to define and enforce standard processes. Implementing separation of duty and privileged access and command constraints based on user role is a foundation for assurance through the software development lifecycle.
Without a centralized approach, it’s difficult to determine whether the different privileges defined for different users for each container are in fact appropriate and consistent with their functional role and scoped in terms of least-privilege access.
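A centralized approach can be as simple as mapping roles to the Docker subcommands they may run and checking every request against that map. The roles and allowed commands below are illustrative assumptions, not a real policy engine:

```python
# Sketch: centrally managed, role-based constraints on Docker subcommands,
# in place of shared root access. Roles and command sets are illustrative.

ROLE_POLICY = {
    "developer": {"build", "pull", "images", "logs"},
    "release":   {"pull", "run", "stop", "logs"},
    "auditor":   {"images", "ps", "logs"},
}

def authorize(role: str, command: str) -> bool:
    """Least-privilege check: may this role execute this Docker subcommand?
    Unknown roles get no access by default."""
    return command in ROLE_POLICY.get(role, set())

print(authorize("developer", "build"))  # True
print(authorize("developer", "run"))    # False: only the release role starts containers
```

Because the policy lives in one place, it can be reviewed against each user’s functional role, and every authorization decision can be logged for accountability.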
4. Hardening the host
One of the key benefits of containerization is that it isolates an application and its dependencies within a self-contained unit that can run anywhere.
A critical implication is that there are tools in place to constrain what the self-contained unit can and can’t access and consume. Control groups and namespaces are the key container isolation components. Control groups define how much of the shared kernel and system resources a container can consume. Namespaces define what a container can “see,” effectively determining which resources the container is authorized to access. The design goals for these components are clear: Whenever you want to run multiple services on a server, it is essential to security and stability that the services be as isolated from each other as possible.
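In Docker, control-group limits surface as ordinary `docker run` flags (`--memory`, `--cpus`, and `--pids-limit` are real Docker options). A small Python sketch that derives those flags from a centrally defined policy; the policy values themselves are illustrative:

```python
# Sketch: express control-group limits as standard `docker run` flags
# generated from a central policy, so every container launch is congruent
# with security policy. Flag names are real Docker options; the image name
# and policy values are hypothetical.

def run_args(image: str, policy: dict) -> list:
    args = ["docker", "run", "-d"]
    if "memory" in policy:
        args += ["--memory", policy["memory"]]          # cap RAM the container may consume
    if "cpus" in policy:
        args += ["--cpus", str(policy["cpus"])]         # cap CPU share
    if "pids" in policy:
        args += ["--pids-limit", str(policy["pids"])]   # cap process count (fork-bomb guard)
    return args + [image]

cmd = run_args("web-frontend:1.2", {"memory": "512m", "cpus": 1.5, "pids": 200})
print(" ".join(cmd))
```

Generating the flags, rather than letting each team hand-type them, is one way to keep control-group configurations appropriate and consistent across an environment.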
The devil in the details is ensuring that control groups and namespaces are appropriately and consistently configured, and configurations are congruent with security policies.
Although control group and namespace isolation can be used to limit a container’s access to kernel resources, it’s not as effective in isolating the container’s execution path. Resource isolation is also not effective for detecting or preventing escalation attacks that abuse privileges or break out of the container “sandbox.”
In the absence of a layered approach with effective controls and visibility for runtime defense and container profiling, the security of containers can easily be compromised through misconfiguration or through explicit attacker actions such as namespace manipulation. For instance, a denial-of-service attack on a containerized environment can look like nothing more than a “rogue” container consuming excessive kernel resources and crowding out other processes.
5. Automating the container security process
In security circles, the idea of baking security into operational processes -- as opposed to bolting it on afterward -- is the Holy Grail. Despite any territorial or cultural divisions that exist between devops and security teams, baking security into containers as they are built, shipped, and run is undoubtedly in the organization’s best interest. It not only leads to inherently more secure applications, it aligns the motivations of devops and security teams, fostering a more collaborative culture.
Because security teams are often unaware of the processes that lead to containers running in production, it is important to involve them in defining workflows and to facilitate a knowledge transfer, so that they can provide guidelines for the controls and practices required to meet security standards and pass compliance audits.
Devops, on the other hand, should do what they do best: automation. Container-based application development processes are already heavily automated. Using CI/CD and orchestration tools to embed security best practices throughout the container lifecycle would make the process of establishing the security governance framework both transparent and relatively painless. It would establish a high security baseline, reducing the need for subsequent security efforts and reducing the likelihood that security will become a barrier to deployment.
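As a sketch of such an embedded control, a pipeline step might compare an image scan summary against policy thresholds and block promotion on failure. The summary format and the thresholds here are assumptions for illustration, not a specific scanner’s output:

```python
# Sketch: a CI/CD security gate that blocks deployment when an image scan
# exceeds policy thresholds. Summary shape and limits are illustrative.

THRESHOLDS = {"critical": 0, "high": 5}  # maximum allowed findings per severity

def gate(scan_summary: dict) -> bool:
    """Return True if the image may proceed to deployment."""
    return all(scan_summary.get(sev, 0) <= limit
               for sev, limit in THRESHOLDS.items())

# A real pipeline step would exit nonzero on failure, halting promotion:
summary = {"critical": 1, "high": 2, "medium": 40}
if gate(summary):
    print("PASS: image meets security baseline")
else:
    print("FAIL: image blocked by security gate")
```

Because the gate runs on every build, the security baseline is enforced continuously rather than checked once before release.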
We all understand what happens when security is not a priority, so why repeat the mistake? Containers provide a great opportunity to get security right because they already have automation in place. Automating security processes into operational workflows may be new to security, but it’s not new to containers, where automation is the norm for all aspects of operations (networking, storage, and so on). Security becomes simply another automation feed.
Devops prescribes the use of automation and a spirit of collaboration to push the speed of agile development “to the right,” across the application lifecycle. Security -- which currently sits as far to the right as anything can without being in production -- can use the same methods to facilitate a shift to the left.
The biggest challenge facing security pros: They are likely not aware that container deployments are planned or perhaps even in process. It serves security teams to get involved sooner rather than later and bake security into the container-driven devops process -- rather than waiting until there’s pressure to get into production and stalling the process instead of enhancing it. We’ve seen that movie before; no one wants to see it again.
Amir Jerbi is co-founder and CTO of Aqua Security. Prior to Aqua, he was chief architect at CA Technologies in charge of the host-based security product line.
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to firstname.lastname@example.org.