Container networking allows containers to communicate with other containers or hosts to share resources and data.
Modern container networking aims to standardize and optimize container data flows, creating zones of isolation that allow large numbers of containers to communicate in an efficient and secure manner. Several standards have been proposed to govern container networking, including the Container Network Model (CNM) and Container Network Interface (CNI).
Popular container platforms such as Docker and Kubernetes use the following network models: none, bridge, host, overlay, and underlay. Each is described in detail below.
This guide is part of our series of articles about Kubernetes networking.
A network plugin or driver manages network interfaces and connectivity between containers and the network. A plugin assigns IP addresses to the containers’ network interfaces. Container networking standards provide a well-defined interface or API that establishes communication between container runtimes and network plugins.
There are various container networking standards available, which enable you to decouple networking from the container runtime. Below are two standards you can use to configure network interfaces for Linux containers.
The Container Network Model (CNM) is a standard proposed by Docker and implemented by its libnetwork library. It has been adopted by many projects and provides integrations with various products, including Project Calico (Calico Open Source), Cisco Contiv, Open Virtual Networking (OVN), VMware, Weave, and Kuryr.
Here are key features of libnetwork made possible via the implementation of CNM:
- Sandbox: a container's isolated network stack, including its interfaces, routing table, and DNS settings
- Endpoint: a virtual interface that joins a sandbox to a network
- Network: a group of endpoints that can communicate with each other directly
- Pluggable network and IPAM drivers, with both local (single-host) and global (multi-host) network scopes
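To make these abstractions concrete, here is a minimal sketch of how they surface in Docker's CLI, driven from Python. It assumes Docker is installed and running; the network name demo-net, the container name demo, and the nginx:alpine image are illustrative choices, not part of the CNM specification.

```python
import subprocess

def docker(*args):
    """Run a docker CLI command and return its stdout."""
    result = subprocess.run(["docker", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

# CNM "network": a user-defined network backed by the bridge driver.
docker("network", "create", "--driver", "bridge", "demo-net")

# CNM "sandbox" and "endpoint": the container receives an isolated network
# stack (sandbox) with an interface (endpoint) attached to demo-net.
docker("run", "-d", "--name", "demo", "--network", "demo-net", "nginx:alpine")

# Inspecting the network shows the container's endpoint and its assigned IP.
print(docker("network", "inspect", "demo-net"))
```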
The Container Network Interface (CNI) is a standard proposed by CoreOS. It was created as a minimal specification that works as a simple contract between network plugins and the container runtime. CNI has been adopted by many projects, including Apache Mesos, Kubernetes, and rkt.
Here are key characteristics of CNI:
- Network configuration is expressed in a simple JSON format.
- Plugins are standalone executables that the container runtime invokes.
- The runtime calls plugins with operations such as ADD when a container is created and DEL when it is removed.
- Plugins can be chained, with each plugin's result passed to the next.
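To illustrate the contract, here is a minimal sketch of how a runtime invokes a CNI plugin. It assumes the CNI reference plugins are installed under /opt/cni/bin, that a network namespace already exists at the given path, and that the script runs with root privileges; the configuration values are illustrative.

```python
import json
import os
import subprocess

# A minimal CNI network configuration for the reference "bridge" plugin,
# delegating address assignment to the "host-local" IPAM plugin.
config = {
    "cniVersion": "0.4.0",
    "name": "demo-net",
    "type": "bridge",
    "bridge": "cni0",
    "ipam": {"type": "host-local", "subnet": "10.22.0.0/16"},
}

# Per the CNI spec, the runtime passes the operation and container details
# as environment variables and the network configuration on stdin.
env = dict(
    os.environ,
    CNI_COMMAND="ADD",                    # create and configure the interface
    CNI_CONTAINERID="example-container",
    CNI_NETNS="/var/run/netns/example",   # the container's network namespace
    CNI_IFNAME="eth0",
    CNI_PATH="/opt/cni/bin",
)

result = subprocess.run(["/opt/cni/bin/bridge"], input=json.dumps(config),
                        env=env, capture_output=True, text=True)
print(result.stdout)  # JSON result describing the interface and assigned IPs
```

Calling the same plugin with CNI_COMMAND="DEL" tears the interface down again, which is how runtimes clean up when a container exits.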
Related content: Read our guide to Kubernetes CNI
In none networking, the container receives its own network stack but no external network interface; it gets only a loopback interface. Docker and rkt behave similarly when minimal or no networking is requested. You can use this model to test containers, designate containers that need no external communication, and stage containers for future network connections.
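As a quick check of this behavior, the following sketch starts a container with none networking and lists its interfaces; it assumes Docker and the alpine image are available.

```python
import subprocess

# With --network none the container gets its own network namespace but no
# external interface; "ip addr" inside it should list only loopback ("lo").
subprocess.run(["docker", "run", "--rm", "--network", "none",
                "alpine", "ip", "addr"], check=True)
```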
A Linux bridge provides the internal host network that enables communication between containers on the same host. Bridge networking employs iptables for NAT and port mapping to provide single-host networking. It is the default Docker network type (docker0).
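The sketch below shows single-host bridge networking in practice, assuming Docker is installed; the subnet, names, and host port are illustrative.

```python
import subprocess

def docker(*args):
    subprocess.run(["docker", *args], check=True)

# Create a user-defined bridge network with an explicit subnet.
docker("network", "create", "--driver", "bridge",
       "--subnet", "172.28.0.0/16", "demo-bridge")

# Attach a container and publish container port 80 on host port 8080;
# Docker programs iptables NAT rules to implement the port mapping.
docker("run", "-d", "--name", "web", "--network", "demo-bridge",
       "-p", "8080:80", "nginx:alpine")
```

Other containers on demo-bridge can reach the web container directly by name via Docker's embedded DNS, while external clients go through the host's port 8080 mapping.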
Host networking allows a newly created container to share the host's network namespace. It provides higher performance, close to the speed of bare-metal networking, and eliminates the need for NAT. The downside of this approach is that it can lead to port conflicts. And while the container has access to the host's network interfaces, it cannot reconfigure the host's network stack unless it is deployed in privileged mode.
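The difference is easy to observe; this sketch (again assuming Docker and the alpine image) lists the interfaces a host-networked container sees:

```python
import subprocess

# With --network host the container shares the host's network namespace,
# so "ip addr" prints the host's own interfaces rather than a private stack.
subprocess.run(["docker", "run", "--rm", "--network", "host",
                "alpine", "ip", "addr"], check=True)
```

Because there is no NAT, a server in this container binding port 80 would conflict with any process already using port 80 on the host.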
An overlay delivers communications across hosts via networking tunnels, allowing containers on different hosts to behave as though they were on a single machine. Containers connected to different overlay networks cannot communicate—this enables network segmentation.
There are various tunneling technologies; Docker's libnetwork, for example, uses virtual extensible LAN (VXLAN). Each cloud provider's tunnel type creates a dedicated route for each VPC or account, which makes public cloud support essential for overlay drivers. Overlays are suitable for hybrid clouds, providing scalability and redundancy without opening public ports.
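As a sketch of VXLAN-based overlay networking in Docker, the following assumes a fresh single-node setup, since Docker's overlay driver requires swarm mode; all names are illustrative.

```python
import subprocess

def docker(*args):
    subprocess.run(["docker", *args], check=True)

# Overlay networks require swarm mode, which coordinates the VXLAN
# tunnels between hosts.
docker("swarm", "init")

# --attachable lets standalone containers (not only swarm services)
# join the overlay network.
docker("network", "create", "--driver", "overlay", "--attachable",
       "demo-overlay")

# Containers attached to demo-overlay on any node in the swarm can reach
# each other as though they shared one network segment.
docker("run", "-d", "--name", "svc", "--network", "demo-overlay",
       "nginx:alpine")
```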
An underlay network driver exposes host interfaces to VMs or containers running on the host. Examples of underlay drivers include MACvlan and IPvlan. Underlays are simpler and more efficient than bridge networking; they don’t require port mapping and are easier to work with than overlays.
Underlays are especially suited to on-premises workloads, traffic prioritization, security, compliance, and brownfield use cases. Underlay networking also eliminates the need for a separate bridge for each VLAN.
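As an example of an underlay driver, the sketch below creates a macvlan network bound to a host NIC. It assumes Docker, a host interface named eth0, and a LAN subnet of 192.168.1.0/24, all of which are environment-specific values.

```python
import subprocess

def docker(*args):
    subprocess.run(["docker", *args], check=True)

# Bind a macvlan network to the physical interface eth0; containers get
# addresses directly on the physical subnet, with no port mapping needed.
docker("network", "create", "--driver", "macvlan",
       "--subnet", "192.168.1.0/24", "--gateway", "192.168.1.1",
       "-o", "parent=eth0", "demo-macvlan")

# The container is reachable on the LAN at its own address, like any host.
docker("run", "-d", "--name", "svc", "--network", "demo-macvlan",
       "--ip", "192.168.1.50", "nginx:alpine")
```

One known macvlan caveat is that the host itself cannot reach containers through the parent interface without extra configuration.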
Related content: Read our guide to networking concepts
A major challenge for container management involves using network data to instrument, monitor, and document network performance—known as network performance management (NPM). You can use virtual taps for virtual machines, or physical taps for physical workloads, to generate network data and capture packets.
Applications on virtual or physical servers are likely to be clearly defined and stationary. Managing containers with dynamic or ephemeral instances is more complicated: you need to ensure the application is instrumented correctly, because traffic cannot be captured from container instances that no longer exist. You also need to retain traffic data for compliance purposes, which may be more difficult in a containerized environment.
NPM providers that rely on network data let you instrument containerized environments in several ways.
The captured traffic must be correlated with data identifying the cluster, pod, and container that created it. You can achieve this by attaching container management system tags to the related data, so it stays identifiable even for containers that have stopped running. This approach allows you to view and manage your data in context.
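As a minimal sketch of this correlation, assuming a Kubernetes cluster reachable via kubectl, the following tags a hypothetical captured flow record with the pod metadata behind its source IP:

```python
import json
import subprocess

# A hypothetical flow record from a tap; the IP is the only link back to
# the workload that produced it.
flow = {"src_ip": "10.22.0.14", "bytes": 48213}

# Snapshot pod metadata and index it by pod IP, so records can be tagged
# even after the pods themselves are gone.
pods = json.loads(subprocess.run(
    ["kubectl", "get", "pods", "--all-namespaces", "-o", "json"],
    check=True, capture_output=True, text=True).stdout)

by_ip = {
    p["status"]["podIP"]: {
        "cluster": "demo-cluster",  # illustrative; not part of the API output
        "namespace": p["metadata"]["namespace"],
        "pod": p["metadata"]["name"],
    }
    for p in pods["items"] if p["status"].get("podIP")
}

# Attach the identifying tags to the captured record.
flow.update(by_ip.get(flow["src_ip"], {}))
print(flow)
```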
Continuous monitoring is a critical capability for DevOps and cloud-native deployments. Most organizations purchase new tools from familiar vendors to address the monitoring requirements of their cloud-native applications. However, some opt for new vendors, indicating that the cloud-native market remains fertile ground for newcomers offering systems designed specifically for cloud-native technologies.
Read our whitepaper: Definitive guide to container networking, security, and troubleshooting
Calico’s flexible modular architecture supports a wide range of deployment options, so you can select the best networking approach for your specific environment and needs. This includes the ability to run with a variety of CNI plugins and also leverage Calico’s IPAM capabilities and underlying network types, in non-overlay or overlay modes, with or without BGP.
Calico’s flexible modular architecture for networking includes the following:
- The Calico CNI network plugin, which connects pods to the host's routing and policy enforcement
- The Calico CNI IPAM plugin, which assigns IP addresses to pods from Calico IP pools
- Overlay network modes using VXLAN or IP-in-IP encapsulation
- Non-overlay modes that route pod traffic over the underlying network, with or without BGP
In addition to providing both network and IPAM plugins, Calico also integrates with a number of other third-party CNI plugins and cloud provider integrations, including Amazon VPC CNI, Azure CNI, Azure cloud provider, Google cloud provider, host local IPAM, and Flannel.
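To show how the pieces fit together, here is a representative CNI configuration using Calico's network plugin with its IPAM plugin. In a real deployment this file is generated by Calico's installation and includes additional fields; the values here are illustrative.

```python
import json

calico_conf = {
    "cniVersion": "0.3.1",
    "name": "k8s-pod-network",
    "type": "calico",                 # Calico CNI network plugin
    "ipam": {"type": "calico-ipam"},  # Calico CNI IPAM plugin
    "kubernetes": {"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"},
}

# Runtimes read such files from /etc/cni/net.d/ to discover the network.
print(json.dumps(calico_conf, indent=2))
```

Swapping the ipam section for another plugin, such as host-local, is one way Calico networking can be combined with third-party IPAM.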