Spiderpool



English | 简体中文

Spiderpool is a CNCF Landscape Level Project.

Introduction

Spiderpool is a Kubernetes underlay network solution. It provides rich IPAM features and CNI integration capabilities, powering CNI projects in the open source community and allowing multiple CNIs to collaborate effectively. It enables underlay CNIs to run smoothly in environments such as bare metal, virtual machines, and any public cloud.

Why develop Spiderpool? The open source community currently lacks a comprehensive, user-friendly, and intelligent underlay network solution, so Spiderpool aims to provide a set of innovative features:

  • Rich IPAM features: shared and dedicated IP pools, fixed IP address assignment, and automatic creation, scaling, and reclamation of dedicated IP pools, matching a wide range of underlay network requirements (a minimal configuration sketch follows this list).

  • Cooperation between underlay and overlay CNIs, giving a Pod multiple CNI interfaces. Spiderpool assigns IP addresses to multiple underlay interfaces and coordinates policy routes between them so that request and reply packets follow a consistent data path. Combining CNIs this way reduces the hardware requirements for deploying a cluster.

  • Enhancements for underlay CNIs such as Macvlan CNI, ipvlan CNI, SR-IOV CNI, and ovs CNI: connecting Pods and hosts so that Pods can access clusterIP services and pass health checks, and detecting IP conflicts and gateway reachability.

  • Not limited to bare metal environments in data centers: Spiderpool also provides a unified underlay CNI solution for OpenStack, VMware, and various public cloud scenarios.
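
The sketch below illustrates the basic IPAM workflow: an IPv4 pool is defined and a workload selects it through a Pod annotation. It is a minimal, hypothetical example; the apiVersion, the subnet/ips/gateway fields, and the ipam.spidernet.io/ippool annotation follow the upstream examples but may differ between Spiderpool releases.

# Hypothetical sketch: field names and apiVersion may vary between Spiderpool releases.
apiVersion: spiderpool.spidernet.io/v2beta1
kind: SpiderIPPool
metadata:
  name: demo-v4-pool
spec:
  subnet: 172.18.41.0/24
  ips:
    - 172.18.41.40-172.18.41.50
  gateway: 172.18.41.1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
      annotations:
        # Ask Spiderpool to allocate IPv4 addresses for this Pod from the pool above.
        ipam.spidernet.io/ippool: '{"ipv4": ["demo-v4-pool"]}'
    spec:
      containers:
        - name: demo
          image: nginx

With such a configuration, the Deployment's Pods would receive addresses from the 172.18.41.40-50 range on the underlay subnet.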

Underlay CNI

There are two technologies in cloud-native networking: the "overlay network" and the "underlay network". Although there is no strict definition of underlay and overlay networks in cloud-native networking, their characteristics can be abstracted from many CNI projects. The two technologies meet the needs of different scenarios.

This article provides a brief comparison of IPAM and network performance between the two technologies, offering better insight into Spiderpool's unique features and use cases.

Why underlay network solutions? The following requirements call for them:

  • For applications with high network-performance requirements, underlay network solutions offer lower latency and higher throughput than overlay network solutions.

  • Traditional host applications often expose services directly through the host IP, cannot tolerate NAT mapping, or already separate transaction streams into VLAN subnets. When such applications migrate to Kubernetes, underlay network solutions lower the cost of network migration.

  • Network security requirements, such as enforcing security with firewalls or VLAN isolation, or monitoring with traditional network observability tools.

  • Underlay network solutions allow flexible customization of VLAN subnets for application access; an application can occupy an independent subnet to guarantee bandwidth isolation on the underlying network. This suits workloads such as KubeVirt, CSI storage projects, and log collection projects.

Architecture

[architecture diagram]

Spiderpool consists of the following components:

  • Spiderpool controller: a set of Deployments that manages CRD validation, status updates, IP reclamation, and automated IP pools.

  • Spiderpool agent: a set of DaemonSets that assists the Spiderpool plugin with IP allocation and the coordinator plugin with information synchronization.

  • Spiderpool plugin: a binary plugin on each host that CNI can utilize to implement IP allocation.

  • coordinator plugin: a binary plugin on each host that CNI can use for multi-NIC route coordination, IP conflict detection, and host connectivity.

  • ifacer plugin: a binary plugin on each host that helps CNIs such as Macvlan and ipvlan dynamically create bond and VLAN master interfaces.

On top of its own components, Spiderpool relies on open-source underlay CNIs to allocate network interfaces to Pods. You can use Multus CNI to manage multiple NICs and CNI configurations.

Any CNI project compatible with third-party IPAM plugins can work well with Spiderpool, such as:

Macvlan CNI, vlan CNI, ipvlan CNI, SR-IOV CNI, ovs CNI, Multus CNI, Calico CNI, Weave CNI
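
For example, a Multus NetworkAttachmentDefinition can chain an underlay CNI with Spiderpool's IPAM and coordinator plugins. The sketch below is hypothetical: the "spiderpool" IPAM type and "coordinator" plugin name follow the component descriptions above, but the exact configuration keys may differ between releases.

# Hypothetical sketch: Macvlan CNI chained with the Spiderpool IPAM and coordinator plugins.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-vlan100
  namespace: kube-system
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "macvlan-vlan100",
      "plugins": [
        {
          "type": "macvlan",
          "master": "eth0.100",
          "mode": "bridge",
          "ipam": { "type": "spiderpool" }
        },
        { "type": "coordinator" }
      ]
    }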

Use case: one or more underlay CNIs

[diagram: Spiderpool with underlay CNIs]

In underlay networks, Spiderpool can work with underlay CNIs such as Macvlan CNI and SR-IOV CNI to provide the following benefits:

  • Rich IPAM capabilities for underlay CNIs, including shared/fixed IPs, multi-NIC IP allocation, and dual-stack support

  • One or more underlay NICs for Pods with coordinating routes between multiple NICs to ensure smooth communication with consistent request and reply data paths

  • Enhanced connectivity between open-source underlay CNIs and hosts using additional veth network interfaces and route control. This enables clusterIP access, local health checks of applications, and much more

But how can containers be deployed with a single underlay CNI when a cluster has several different underlying setups?

  • Some nodes in the cluster are virtual machines, such as VMware VMs without promiscuous mode enabled, while others are bare metal hosts connected to traditional switched networks. Which CNI solution should be deployed on each type of node?

  • Some bare metal nodes have only a single SR-IOV high-speed NIC providing just 64 VFs. How can more Pods run on such a node?

  • Some bare metal nodes have an SR-IOV high-speed NIC capable of running low-latency applications, while others have only ordinary network cards for running regular applications. What CNI solution should be deployed on each type of node?

By deploying multiple underlay CNIs simultaneously through Multus CNI configurations, with Spiderpool providing IPAM for all of them, the resources of these differently equipped nodes can be combined to solve such problems.

[diagram: multiple underlay CNIs across different nodes]

For example, as shown in the above diagram, different nodes with varying networking capabilities in a cluster can use various underlay CNIs, such as SR-IOV CNI for nodes with SR-IOV network cards, Macvlan CNI for nodes with ordinary network cards, and ipvlan CNI for nodes with restricted network access (e.g., VMware virtual machines with limited layer 2 network forwarding).
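
One way to express this per-node-group split is to bind each IP pool to a subset of nodes. The sketch below assumes SpiderIPPool supports a nodeAffinity label selector and uses a hypothetical network/sriov node label; check the linked examples for the exact schema.

# Hypothetical sketch: one pool per node group, selected by node labels.
apiVersion: spiderpool.spidernet.io/v2beta1
kind: SpiderIPPool
metadata:
  name: pool-sriov-nodes
spec:
  subnet: 10.6.0.0/16
  ips:
    - 10.6.1.10-10.6.1.100
  nodeAffinity:            # assumed field: restricts the pool to SR-IOV nodes
    matchLabels:
      network/sriov: enabled
---
apiVersion: spiderpool.spidernet.io/v2beta1
kind: SpiderIPPool
metadata:
  name: pool-macvlan-nodes
spec:
  subnet: 10.7.0.0/16
  ips:
    - 10.7.1.10-10.7.1.100
  nodeAffinity:            # assumed field: restricts the pool to ordinary nodes
    matchLabels:
      network/sriov: disabled

Workloads scheduled onto either node group would then draw addresses from the matching pool through the corresponding underlay CNI.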

Use case: underlay CNI collaborates with overlay CNI

[diagram: underlay CNI collaborating with overlay CNI]

In overlay networks, Spiderpool uses Multus to add an overlay NIC (provided by a CNI such as Calico or Cilium) and multiple underlay NICs (provided by Macvlan CNI or SR-IOV CNI) to each Pod. This offers several benefits:

  • Rich IPAM features for underlay CNIs, including shared/fixed IPs, multi-NIC IP allocation, and dual-stack support.

  • Route coordination between a Pod's multiple underlay CNI NICs and its overlay NIC, ensuring consistent request and reply data paths for smooth communication.

  • The overlay NIC acts as the Pod's default interface; with coordinated routes and host connectivity, Pods can access clusterIP services and pass local health checks, while overlay traffic is forwarded through the overlay network and underlay traffic through the underlay network.

The integration of Multus CNI and Spiderpool IPAM enables an overlay CNI to collaborate with multiple underlay CNIs. For example, in a cluster with nodes of varying network capabilities, Pods on bare-metal nodes can attach both overlay and underlay NICs, while Pods on virtual machine nodes that only serve east-west traffic attach the overlay NIC alone (see the sketch below). This approach provides several benefits:

  • Applications that serve east-west traffic can be restricted to the overlay NIC only, while applications that serve north-south traffic can attach both overlay and underlay NICs. This reduces underlay IP address consumption and manual maintenance costs while preserving Pod connectivity within the cluster.

  • Fully integrate resources from virtual machines and bare-metal nodes.

[diagram: overlay CNI with additional underlay NICs]
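
As a concrete illustration, a Pod can keep the cluster's default overlay CNI for its primary interface and attach an extra underlay NIC through the standard Multus annotation. The NetworkAttachmentDefinition name below refers to the earlier hypothetical sketch.

# Hypothetical sketch: the default NIC comes from the cluster's overlay CNI (e.g. Calico
# or Cilium); Multus attaches an additional underlay NIC defined by macvlan-vlan100.
apiVersion: v1
kind: Pod
metadata:
  name: north-south-app
  annotations:
    k8s.v1.cni.cncf.io/networks: kube-system/macvlan-vlan100
spec:
  containers:
    - name: app
      image: nginx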

Use case: underlay CNI on public cloud and VM

It is hard to run an underlay CNI on public clouds, OpenStack, or VMware; such environments usually require a vendor-specific underlay CNI because they typically impose the following limitations:

  • The IaaS network infrastructure restricts the MAC addresses of packets. On the one hand, the source MAC is checked and must match the MAC address of the VM's network interface. On the other hand, packets are only forwarded to destination MACs that belong to VM network interfaces.

    Common CNI plugins generate a new MAC address for each Pod, which breaks Pod communication.

  • The IaaS network infrastructure also restricts the IP addresses of packets. A packet is only forwarded correctly when both its source and destination IPs have been assigned to a VM.

    Common CNI plugins assign Pod IP addresses that do not comply with the IaaS settings, which likewise breaks Pod communication.

Spiderpool provides IP pools based on node topology, aligned with the IP allocation settings of each VM. In conjunction with the ipvlan CNI, this yields an underlay CNI solution for various public cloud environments.
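
A node-topology pool might look like the hypothetical sketch below: one pool per VM node, containing only the secondary IPs that the IaaS has already assigned to that VM's network interface. The nodeAffinity field and the IP ranges are assumptions for illustration.

# Hypothetical sketch: a per-node pool restricted to the secondary IPs of node1's NIC.
apiVersion: spiderpool.spidernet.io/v2beta1
kind: SpiderIPPool
metadata:
  name: node1-eth1-pool
spec:
  subnet: 192.168.10.0/24
  ips:
    - 192.168.10.11-192.168.10.14    # secondary IPs the IaaS assigned to node1
  gateway: 192.168.10.1
  nodeAffinity:                      # assumed field: only Pods scheduled to node1 use this pool
    matchLabels:
      kubernetes.io/hostname: node1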

Quick start

Refer to Quick start to set up a cluster quickly.

Major features

  • For applications requiring static IP addresses, this can be achieved with an IP pool that owns a limited set of IP addresses and is bound to the application through pod affinity (see the first sketch after this list). See example for more details.

    Applications that do not require static IP addresses can share an IP pool. See example for more details.

  • For stateful applications, IP addresses can be automatically fixed for each Pod, and the overall IP scaling range can be fixed as well. See example for more details.

  • The subnet feature, on the one hand, helps separate the responsibilities of the infrastructure administrator and the application administrator.

    On the other hand, it can automatically create and dynamically scale a fixed-IP pool for each application requiring static IPs, which reduces the operational burden of managing IP pools. See example for more details. In addition to Kubernetes-native controllers, the subnet feature also supports operator-based third-party pod controllers. See example for details.

  • When an application's Pods run across different network zones, Spiderpool can assign them IP addresses from different subnets. See example for details.

  • Supports assigning IP addresses from different subnets to the multiple NICs of a Pod and coordinates policy routes between the interfaces to ensure consistent request and reply data paths.

    For scenarios involving multiple Underlay NICs, please refer to the example.

    For scenarios involving one Overlay NIC and multiple Underlay NICs, please refer to the example.

  • Default IP pools can be set for the cluster or for a namespace. In addition, an IP pool can be shared by the whole cluster or bound to a specific namespace. See example for details.

  • Strengthens CNIs such as Macvlan CNI, ipvlan CNI, SR-IOV CNI, and ovs CNI to access clusterIP services and pass pod health checks (example), and to detect IP conflicts and gateway reachability (example).

  • Node-based IP pools, supporting underlay CNIs running on bare metal (example), VMware virtual machines (example), OpenStack virtual machines (example), and public clouds (example).

  • When a Pod starts, Spiderpool can dynamically create the bond and VLAN interfaces used as the master interface for Macvlan CNI and ipvlan CNI. See example for details.

  • Customized routes can be specified per IP pool or via pod annotations. See example for details.

  • Easy generation of Multus NetworkAttachmentDefinition custom resources with best-practice CNI configurations, ensuring well-formatted JSON to improve the experience (see the second sketch after this list). See example for details.

  • Multiple IP pools can be set for an application to prevent its IP addresses from running out. See example for details.

  • Reserved IPs can be set so that they are never assigned to Pods, avoiding misuse of IP addresses already taken by hosts outside the cluster. See example for details.

  • Outstanding performance for assigning and releasing Pod IPs, showcased in the test report.

  • A well-designed IP reclamation mechanism releases IP addresses promptly and helps the cluster or application recover quickly from breakdowns. See example for details.

  • All of the above features work in IPv4-only, IPv6-only, and dual-stack scenarios. See example for details.

  • Supports AMD64 and ARM64.

  • Metrics
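
First sketch, for the static-IP feature above: a small pool is reserved for one application by restricting it with a pod label selector. The podAffinity field name and the label are assumptions for illustration; the linked example shows the exact schema.

# Hypothetical sketch: two IPs reserved for Pods labelled app=static-app, so two
# replicas keep stable addresses drawn only from this pool.
apiVersion: spiderpool.spidernet.io/v2beta1
kind: SpiderIPPool
metadata:
  name: static-app-pool
spec:
  subnet: 172.18.41.0/24
  ips:
    - 172.18.41.51-172.18.41.52
  podAffinity:              # assumed field: only matching Pods may allocate from this pool
    matchLabels:
      app: static-app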
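
Second sketch, for NetworkAttachmentDefinition generation: a SpiderMultusConfig resource that Spiderpool renders into a Multus NetworkAttachmentDefinition. The cniType and macvlan field names are assumptions based on the upstream examples and may differ between releases.

# Hypothetical sketch: Spiderpool generates a NetworkAttachmentDefinition with
# well-formatted CNI JSON from this higher-level resource.
apiVersion: spiderpool.spidernet.io/v2beta1
kind: SpiderMultusConfig
metadata:
  name: macvlan-vlan200
  namespace: kube-system
spec:
  cniType: macvlan          # assumed field names; see the linked example for the exact schema
  macvlan:
    master: ["eth0"]
    vlanID: 200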

License

Spiderpool is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.

  

Spiderpool enriches the CNCF CLOUD NATIVE Landscape.
