Topics Discussed

  • Baseline Concepts

  • Architecture

  • Network Overlay

  • K8s Services and Policies

  • APIs/Contiv-Netctl Commands

Contiv-VPP

This is the "in a nutshell" slide

One form of K8s cluster networking involves interconnecting pods

  • 4 x patterns of K8s Cluster Networking

  • CNI Network Plugins help “bootstrap” a cluster IP network at pod creation

  • Multiple CNI plugins to choose from. All provide pod connectivity, and most come with function add-ons. The choice of which to use depends on many factors, including but not limited to desired function, service/policy handling, performance, security, and operations.
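As a sketch of what a CNI plugin's "bootstrap" looks like on disk, the snippet below builds a minimal network config in the shape defined by the CNI spec. The network and plugin names are hypothetical; each real plugin (Flannel, Calico, Contiv-VPP, etc.) drops its own config under /etc/cni/net.d/ at install time.

```python
import json

# Minimal CNI network config in the shape defined by the CNI spec.
# "name" and "type" below are hypothetical placeholders, not the
# config any particular plugin actually installs.
cni_conf = {
    "cniVersion": "0.3.1",
    "name": "demo-pod-network",    # hypothetical network name
    "type": "demo-cni-plugin",     # hypothetical plugin binary name
    "ipam": {
        "type": "host-local",      # a common reference IPAM plugin
        "subnet": "10.1.1.0/24",   # example per-node pod subnet
    },
}

# The kubelet hands a config like this to the plugin binary when a
# pod's network namespace is created.
print(json.dumps(cni_conf, indent=2))
```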

All packets travel through the kernel

  • Traditionally applications have relied on the kernel’s mature TCP/IP stack.

  • Perfectly fine for dishing out web server traffic.

  • Evolving high-performance apps could encounter a performance bottleneck with kernel networking.

  • Another issue to consider is what happens if the kernel network stack breaks (caused by a misbehaving app or something else).

User Space Networking

  • The app and accompanying network stack (e.g. TCP/IP, tunnel encap/decap, IPsec) reside in user space. It could now be referred to as a Cloud Native Network Function (CNF). See next slide.

Other advantages include:

  • Elevated throughput, as the user space CNF can now talk directly to the physical network interface card (NIC), with the goal of keeping pace with the speed developments happening in this space.

  • Accelerated network innovation development and roll-out. CNF developers can go to town and paint their innovations on a large user space canvas. It is THE opportunity to mandate that all CNFs run in user space. It just makes sense.

  • Fast recovery. If anything happens to the user space CNF stack (e.g. upgrade, crash, etc.), it DOES NOT bring down the whole node. You just restart it quickly and continue on with your work.

High performance shared memory communications when talking user space to user space. Memif for inter-VPP traffic is one example.
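As a hedged sketch of that memif example, a shared-memory interface between two VPP instances can be created from the VPP CLI roughly as below. The interface IDs and addresses are illustrative, not taken from any particular deployment:

```
# On the first VPP instance (master side of the shared-memory channel):
create interface memif id 0 master
set interface state memif0/0 up
set interface ip address memif0/0 192.168.100.1/24

# On the second VPP instance (slave side, attached to the same socket):
create interface memif id 0 slave
set interface state memif0/0 up
set interface ip address memif0/0 192.168.100.2/24
```

Both sides map the same memory region, so packets move between the two user space stacks without crossing the kernel.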

Some definitions of a CNF

  • For now, let’s assume a CNF provides L2 and L3 functions and is L4-aware for the purposes of NAT and ACLs.

CNF Software Dataplane



Ligato is an open source project that provides a platform and code samples for development of cloud native VNFs. It includes a management/control agent for VPP and a Service Function Chain (SFC) Controller for stitching virtual and physical networking.

Baseline CNF vSwitch

CNFs with a Dose of Ligato

Kubernetes State Reflector

Contiv ETCD

  • Specific to Contiv-VPP; needed to avoid any latency associated with the system ETCD.

Contiv vSwitch

Contiv VPP Agent

  • Based on the Ligato VPP agent, with some modifications implemented for Contiv-VPP.

CNI and STN

Contiv UI

Topology Graph of a 3 x Node Contiv-VPP Network

  • Contiv vSwitches are contained in each node, including the master.

IPAM

Example of Contiv vSwitch configuration with addressing provided by IPAM
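The per-node addressing that IPAM hands out can be illustrated with a short sketch: each node gets its own pod subnet carved out of a cluster-wide pod CIDR. The CIDR and prefix length below are illustrative assumptions, not necessarily what a given deployment configures:

```python
import ipaddress

# Illustrative cluster-wide pod CIDR; each node is assigned its own
# /24 slice of it. The actual ranges are set in the IPAM config of
# the deployment, so treat these values as an example only.
pod_cidr = ipaddress.ip_network("10.1.0.0/16")

def node_pod_subnet(node_id: int, prefix_len: int = 24):
    """Return the pod subnet carved out for a given node ID."""
    subnets = list(pod_cidr.subnets(new_prefix=prefix_len))
    return subnets[node_id]

for node_id in range(1, 4):  # a 3 x node cluster
    print(f"node {node_id}: pod subnet {node_pod_subnet(node_id)}")
```

Pod addresses on a node are then allocated from that node's slice, so the pod's node is recoverable from its IP alone.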

Logical View of a Contiv-VPP Network Overlay

Depicts a 3 x node Contiv-VPP Tunnel Overlay

  • 5 x NGINX pods are networked together

  • VXLAN tunnel encap is used, but other possibilities include SRv6 and MPLS over UDP.

Application Pods

  • A set of pods connected to the overlay.

Pods on each node are connected by VXLAN tunnels

  • one in each direction between each pair of nodes

  • forms a full mesh with a unique VXLAN network identifier (VNI)
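The full-mesh arithmetic above can be sketched directly: with one unidirectional tunnel in each direction per node pair, n nodes need n × (n − 1) tunnels. The node names are made up for illustration:

```python
from itertools import permutations

# Hypothetical node names for the 3 x node cluster from the diagram.
nodes = ["node1", "node2", "node3"]

# One unidirectional VXLAN tunnel in each direction between every
# pair of nodes -> n * (n - 1) tunnels form the full mesh.
tunnels = [(src, dst) for src, dst in permutations(nodes, 2)]

for src, dst in tunnels:
    print(f"vxlan tunnel: {src} -> {dst}")
print(f"total tunnels: {len(tunnels)}")  # 3 * 2 = 6
```

Adding a fourth node would grow the mesh to 4 × 3 = 12 unidirectional tunnels, which is why full-mesh overlays are usually paired with automated provisioning.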

Contiv-VPP K8s Service VPP State

  • on the left are the service endpoints and their IP addresses

  • on the right is a snippet of a REST API request returning the VPP NAT entries in the Contiv vSwitch

  • this enables K8s service traffic to benefit from the VPP-based high performance dataplane
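The service-to-endpoint translation described above can be sketched in miniature: a NAT table maps the service's cluster IP and port to one of the backing pod endpoints, which is the job the VPP NAT entries in the vSwitch perform at dataplane speed. All addresses below are made up for illustration, and the random backend pick is a simple stand-in for VPP's load-balancing NAT:

```python
import random

# Hypothetical K8s service: a cluster IP/port fronting three pods
# (one per node, matching the 3 x node overlay above).
service = ("10.96.0.10", 80)
endpoints = [("10.1.1.2", 8080), ("10.1.2.2", 8080), ("10.1.3.2", 8080)]

def dnat(dst_ip: str, dst_port: int):
    """Translate a service destination to a concrete pod endpoint.

    Non-service traffic passes through untouched, as it would in the
    vSwitch's NAT stage."""
    if (dst_ip, dst_port) == service:
        return random.choice(endpoints)
    return (dst_ip, dst_port)

print(dnat("10.96.0.10", 80))  # one of the pod endpoints
print(dnat("10.1.9.9", 443))   # pass-through
```

Because this rewriting happens inside the user space vSwitch, service traffic never has to detour through the kernel's service proxying path.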

Back to Overview