Container Networking: What You Should Know

What Is Container Networking?

Container networking allows containers to communicate with other containers or hosts to share resources and data.

Modern container networking aims to standardize and optimize container data flows, creating zones of isolation that allow large numbers of containers to communicate in an efficient and secure manner. Several standards have been proposed to govern container networking, including the Container Network Model (CNM) and Container Network Interface (CNI).

The following network models are used by popular container platforms such as Docker and Kubernetes:

  • None – The simplest mode, in which the container receives only a loopback interface and does not communicate with an external network.
  • Bridge – An internal host network that enables communication between containers on the same host.
  • Host – Allows a container to share the host’s network namespace, enabling high-speed networking.
  • Overlays – Use networking tunnels to connect containers across hosts; containers attached to different overlay networks are isolated from each other and cannot communicate over the local bridge.
  • Underlays – Expose host interfaces directly to VMs or containers running on the host.

This guide is part of our series of articles about Kubernetes networking.

What Are Container Networking Standards?

A network plugin or driver manages network interfaces and connectivity between containers and the network. A plugin assigns IP addresses to the containers’ network interfaces. Container networking standards provide a well-defined interface or API that establishes communication between container runtimes and network plugins.

There are various container networking standards available, which enable you to decouple networking from the container runtime. Below are two standards you can use to configure network interfaces for Linux containers.

Container Network Model

The Container Network Model (CNM) is a standard proposed by Docker. It is implemented by libnetwork, Docker’s networking library, and has integrations with various products, including Project Calico (Calico Open Source), Cisco Contiv, Open Virtual Networking (OVN), VMware, Weave, and Kuryr.

Here are key features of libnetwork made possible via the implementation of CNM:

  • Network sandbox – An isolated environment that contains the container’s network configuration. It works as a networking stack within the container.
  • Endpoint – A virtual network interface built as a pair: one end sits in the network sandbox and the other in a designated network. An endpoint can join only one network, while a single network sandbox can hold multiple endpoints.
  • Network – A uniquely identifiable collection of endpoints allowed to communicate with each other.
  • User-defined labels – CNM lets you define labels using the label flag. These labels are passed as metadata between drivers and libnetwork. Labels enable the runtime to inform driver behavior.
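
These concepts map onto everyday Docker operations. The sketch below, written against the Docker SDK for Python (the `docker` package), creates a user-defined network with a label and connects a container to it; the connection corresponds to a CNM endpoint, and the container’s network configuration lives in its sandbox. The image, network name, and label are placeholders.

```python
import docker  # Docker SDK for Python ("pip install docker")

client = docker.from_env()

# A CNM "network": a uniquely identifiable collection of endpoints.
# The label is passed to the driver as metadata, as described above.
app_net = client.networks.create(
    "app-net",                      # placeholder name
    driver="bridge",
    labels={"environment": "dev"},  # user-defined label
)

# Start a container; its network configuration lives in its sandbox.
container = client.containers.run("alpine:3.19", "sleep 300", detach=True)

# Connecting the container creates a CNM "endpoint": one end of the
# interface pair sits in the container's sandbox, the other in app-net.
app_net.connect(container, aliases=["web"])

app_net.reload()
print(app_net.attrs["Containers"])  # shows the container attached to the network
```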

Container Network Interface

The Container Network Interface (CNI) is a standard proposed by CoreOS. It was created as a minimal specification that works as a simple contract between network plugins and the container runtime. CNI has been adopted by many projects, including Apache Mesos, Kubernetes, and rkt.

Here are key characteristics of CNI:

  • CNI uses a JSON schema to define the desired input and output from CNI network plugins.
  • CNI enables you to run multiple plugins, so a container can join networks driven by different plugins.
  • CNI describes networks in JSON configuration files, which are instantiated in the container’s network namespace when the runtime invokes the CNI plugins.
  • CNI plugins support commands (ADD and DEL) that add container network interfaces to networks and remove them.
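
For illustration, here is a minimal sketch of what a CNI network configuration looks like, generated from Python. The fields follow the reference `bridge` and `host-local` plugins from the CNI project; the file name, bridge name, and subnet are placeholders, and real runtimes read these files from their configured CNI directory (conventionally `/etc/cni/net.d/`).

```python
import json

# Minimal CNI network configuration for the reference "bridge" plugin,
# using the "host-local" IPAM plugin. Values are illustrative.
cni_config = {
    "cniVersion": "0.4.0",
    "name": "mynet",
    "type": "bridge",          # which CNI plugin binary the runtime invokes
    "bridge": "cni0",
    "isGateway": True,
    "ipMasq": True,
    "ipam": {
        "type": "host-local",  # IPAM is delegated to a second plugin
        "subnet": "10.22.0.0/16",
        "routes": [{"dst": "0.0.0.0/0"}],
    },
}

# The runtime passes a configuration like this to the plugin on stdin
# when it issues the ADD or DEL command for a container.
with open("10-mynet.conf", "w") as f:
    json.dump(cni_config, f, indent=2)
```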

Related content: Read our guide to Kubernetes CNI

Types of Container Networking

None

With this networking type, the container receives its own network stack but no external network interface, only a loopback interface. Docker and rkt employ similar behavior when minimal or no networking is used. You can use this model to test containers, run containers that need no external communication, and stage containers for future network connections.
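
As a quick way to see this behavior, the sketch below (using the Docker SDK for Python; the image and command are placeholders) starts a container with no networking and prints its interfaces. Only the loopback device should appear.

```python
import docker  # Docker SDK for Python

client = docker.from_env()

# With network_mode="none" the container gets its own network stack
# but no external interface, only the loopback device.
output = client.containers.run(
    "alpine:3.19", "ip addr show",
    network_mode="none",
    remove=True,
)
print(output.decode())  # expect only "lo" in the interface list
```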

Bridge

A Linux bridge provides the internal host network that enables communication between containers on the same host. Bridge networking employs iptables for NAT and port mapping to provide single-host networking. It is the default Docker network type (docker0).
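
A common pattern is to create a user-defined bridge and publish a container port, which Docker implements with NAT rules on the host. A minimal sketch with the Docker SDK for Python follows; the network name, image, and ports are placeholders.

```python
import docker

client = docker.from_env()

# A user-defined bridge: containers attached to it can reach each other
# by name on the same host. Without any network arguments, containers
# land on the default "docker0" bridge instead.
bridge_net = client.networks.create("web-bridge", driver="bridge")

# Publishing container port 80 as host port 8080 is implemented with
# iptables NAT/port-mapping rules, as described above.
web = client.containers.run(
    "nginx:1.25",
    detach=True,
    network="web-bridge",
    ports={"80/tcp": 8080},
)
print(web.name, "reachable on host port 8080")
```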

Host

This networking type allows a newly created container to share the host’s network namespace. It provides higher performance, almost at the speed of bare-metal networking, and eliminates the need for NAT. However, the downside of this approach is that it can lead to port conflicts. The container has access to the host’s network interfaces but, unless it is deployed in privileged mode, it cannot reconfigure the host’s network stack.
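
The sketch below (Docker SDK for Python; the image is a placeholder) runs a container in host mode. The interfaces it prints are the host’s own, since no separate namespace or NAT is involved.

```python
import docker

client = docker.from_env()

# network_mode="host" shares the host's network namespace: no separate
# interface pair, no NAT, and any port the process binds is bound
# directly on the host (which is where port conflicts can arise).
output = client.containers.run(
    "alpine:3.19", "ip addr show",
    network_mode="host",
    remove=True,
)
print(output.decode())  # prints the host's interfaces, not a container-only view
```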

Overlays

An overlay delivers communications across hosts via networking tunnels, allowing containers on different hosts to behave as though they were on a single machine. Containers connected to different overlay networks cannot communicate—this enables network segmentation.

There are various tunneling technologies. The technology used in Docker libnetwork, for example, is the virtual extensible local area network (VXLAN). Each cloud provider tunnel type creates a dedicated route for each VPC or account. Public cloud support is essential for overlay drivers. Overlays are suitable for hybrid clouds, providing scalability and redundancy without opening public ports.
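
With Docker, overlay networks are created through Swarm. Below is a minimal sketch with the Docker SDK for Python, assuming the daemon has already joined a swarm; the network name is a placeholder.

```python
import docker

client = docker.from_env()

# Requires the daemon to be part of a swarm (e.g. "docker swarm init").
# The overlay driver builds VXLAN tunnels between hosts, so containers
# attached to this network can reach each other across nodes, while
# containers on other overlay networks remain isolated.
overlay = client.networks.create(
    "multi-host-net",
    driver="overlay",
    attachable=True,   # allow standalone containers, not just services
)
print(overlay.name, overlay.attrs["Driver"])
```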

Underlays

An underlay network driver exposes host interfaces to VMs or containers running on the host. Examples of underlay drivers include MACvlan and IPvlan. Underlays are simpler and more efficient than bridge networking; they don’t require port mapping and are easier to work with than overlays.

Underlays are especially suited to on-premises workloads, traffic prioritization, security, compliance, and brownfield use cases. Underlay networking eliminates the need for a separate bridge for each VLAN.
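
As an example of an underlay driver in practice, the sketch below creates a macvlan network bound to a host interface using the Docker SDK for Python. The parent interface, subnet, and gateway are assumptions that must match your physical network.

```python
import docker
from docker.types import IPAMConfig, IPAMPool

client = docker.from_env()

# Containers on this network get interfaces carved directly out of the
# host's eth0 (no bridge, no port mapping); addresses come from the
# physical subnet, so the values below must match your LAN.
ipam = IPAMConfig(pool_configs=[
    IPAMPool(subnet="192.168.1.0/24", gateway="192.168.1.1"),
])

macvlan_net = client.networks.create(
    "lan-macvlan",
    driver="macvlan",
    options={"parent": "eth0"},  # the host NIC to attach to
    ipam=ipam,
)
print(macvlan_net.name, macvlan_net.attrs["Driver"])
```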

Related content: Read our guide to networking concepts

Container Network Performance Management

A major challenge for container management involves using network data to instrument, monitor, and document network performance—known as network performance management (NPM). You can use virtual or physical taps for virtual machines or physical workloads to generate network data and capture packets.

Virtual or physical server applications are likely to be clearly defined and stationary. Managing containers with dynamic or ephemeral instances can be more complicated: you need to ensure the application is instrumented correctly, since it’s not possible to route traffic to container instances that no longer exist. You also need to retain traffic data for compliance purposes, which may be more difficult in a containerized environment.

NPM providers relying on network data let you instrument containerized environments in various ways, including:

  • Directing traffic to network packet brokers or capturing devices with a CNI.
  • Capturing traffic in container pods with a sidecar proxy.
  • Triggering continuous, on-demand data collection with eBPF.

The captured traffic must be correlated with metadata identifying the cluster, pod, and container that generated it. You can achieve this by attaching container management system tags to the related data, even for containers that have stopped running. This approach allows you to view and manage your data in context.

Continuous monitoring is a critical capability for DevOps and cloud-native deployments. Most organizations purchase new tools from familiar vendors to address the monitoring requirements of their cloud-native applications. However, some organizations opt for new vendors, indicating that the cloud-native market remains fertile ground for vendors releasing systems designed specifically for cloud-native technologies.

Read our whitepaper: Definitive guide to container networking, security, and troubleshooting

Enterprise Kubernetes Networking with Calico

Calico’s flexible modular architecture supports a wide range of deployment options, so you can select the best networking approach for your specific environment and needs. This includes the ability to run with a variety of CNI plugins and also leverage Calico’s IPAM capabilities and underlying network types, in non-overlay or overlay modes, with or without BGP.

Calico’s flexible modular architecture for networking includes the following:

  • Calico CNI network plugin – Connects pods to the host network namespace’s L3 routing using a pair of virtual ethernet devices (veth pair).
  • Calico CNI IPAM plugin – Allocates IP addresses for pods out of one or more configurable IP address ranges, dynamically allocating small blocks of IPs per node as required.
  • Overlay network modes – Calico provides both VXLAN and IP-in-IP overlay networks, including cross-subnet-only modes (see the sketch after this list).
  • Non-overlay network modes – Calico can provide non-overlay networks running on top of any underlying L2 network, or an L3 network that is either a public cloud network with appropriate cloud provider integration, or a BGP capable network (typically an on-prem network with standard Top-of-Rack routers).
  • Network policy enforcement – Calico’s networking and security policy enforcement engine implements the full range of Kubernetes Network Policy features, plus the extended features of Calico’s Networking and Security Policy.
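
As an illustration of how the overlay and non-overlay modes are selected, the sketch below generates a Calico IPPool manifest that enables IP-in-IP encapsulation only across subnet boundaries. The pool name and CIDR are placeholders, and in practice the manifest is applied with calicoctl or kubectl rather than written to a local file.

```python
import json

# Illustrative Calico IPPool: IP-in-IP encapsulation is used only for
# traffic that crosses a subnet boundary; same-subnet traffic is routed
# natively. CIDR and name are placeholders.
ip_pool = {
    "apiVersion": "projectcalico.org/v3",
    "kind": "IPPool",
    "metadata": {"name": "default-ipv4-pool"},
    "spec": {
        "cidr": "10.48.0.0/16",
        "ipipMode": "CrossSubnet",   # overlay only between subnets
        "vxlanMode": "Never",
        "natOutgoing": True,
    },
}

with open("ippool.json", "w") as f:
    json.dump(ip_pool, f, indent=2)
```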

In addition to providing both network and IPAM plugins, Calico also integrates with a number of other third-party CNI plugins and cloud provider integrations, including Amazon VPC CNI, Azure CNI, Azure cloud provider, Google cloud provider, host local IPAM, and Flannel.
