The OpenShift SDN enables communication between pods across the {product-title} cluster, establishing a pod network. Two SDN plug-ins are currently available (ovs-subnet and ovs-multitenant), which provide different methods for configuring the pod network. A third (ovs-networkpolicy) is currently in Tech Preview.
The upstream Kubernetes project does not come with a default network solution. Instead, Kubernetes developed the Container Network Interface (CNI) so that network providers can integrate their own SDN solutions.
There are several OpenShift SDN plug-ins available out of the box from Red Hat, as well as third-party plug-ins.
Red Hat has worked with a number of SDN providers to certify their SDN solutions on {product-title} via the Kubernetes CNI interface, including a support process for their SDN plug-in through their product’s entitlement process. Should you open a support case with OpenShift, Red Hat can facilitate an exchange process so that both companies are involved in meeting your needs.
The following SDN solutions are validated and supported on {product-title} directly by the third-party vendor:

- Cisco Contiv (™)
- Juniper Contrail (™)
- Nokia Nuage (™)
- Tigera Calico (™)
- VMware NSX-T (™)
Installing VMware NSX-T (™) on {product-title}
VMware NSX-T (™) provides an SDN and security infrastructure to build cloud-native application environments. In addition to vSphere hypervisors (ESX), these environments include KVM and native public clouds.
The current integration requires a new installation of both NSX-T and {product-title}. NSX-T version 2.0 is currently supported, and it supports only the ESX and KVM hypervisors.
See the NSX-T Container Plug-in for OpenShift - Installation and Administration Guide for more information.
For initial advanced installations, the ovs-subnet plug-in is installed and configured by default, though it can be overridden during installation using the os_sdn_network_plugin_name parameter, which is configurable in the Ansible inventory file.
# Configure the multi-tenant SDN plugin (default is 'redhat/openshift-ovs-subnet')
# os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'

# Configure the NetworkPolicy SDN plugin (Tech Preview)
# os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy'

# Disable the OpenShift SDN plugin
# openshift_use_openshift_sdn=False

# Configure SDN cluster network CIDR block. This network block should
# be a private block and should not conflict with existing network
# blocks in your infrastructure that pods may require access to.
# Can not be changed after deployment.
#osm_cluster_network_cidr=10.1.0.0/16

# default subdomain to use for exposed routes
#openshift_master_default_subdomain=apps.test.example.com

# Configure SDN cluster network and kubernetes service CIDR blocks. These
# network blocks should be private and should not conflict with network blocks
# in your infrastructure that pods may require access to. Can not be changed
# after deployment.
#osm_cluster_network_cidr=10.1.0.0/16
#openshift_portal_net=172.30.0.0/16

# Configure number of bits to allocate to each host's subnet e.g. 8
# would mean a /24 network on the host.
#osm_host_subnet_length=8

# This variable specifies the service proxy implementation to use:
# either iptables for the pure-iptables version (the default),
# or userspace for the userspace proxy.
#openshift_node_proxy_mode=iptables
Cluster administrators can control pod network settings on masters by modifying parameters in the networkConfig section of the master configuration file (located at /etc/origin/master/master-config.yaml by default):
networkConfig:
clusterNetworkCIDR: 10.128.0.0/14 (1)
hostSubnetLength: 9 (2)
networkPluginName: "redhat/openshift-ovs-subnet" (3)
serviceNetworkCIDR: 172.30.0.0/16 (4)
(1) Cluster network for node IP allocation
(2) Number of bits for pod IP allocation within a node
(3) Set to redhat/openshift-ovs-subnet for the ovs-subnet plug-in, redhat/openshift-ovs-multitenant for the ovs-multitenant plug-in, or redhat/openshift-ovs-networkpolicy for the ovs-networkpolicy plug-in
(4) Service IP allocation for the cluster
Important: The clusterNetworkCIDR and serviceNetworkCIDR blocks cannot be changed after deployment, so choose private blocks that do not conflict with existing networks in your infrastructure that pods may require access to.
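To confirm the pod network values an existing cluster is actually using, you can inspect the clusternetwork object (typically named default) with the administrator CLI. This is an optional sanity check, and the output format can vary between releases:

$ oc get clusternetwork default -o yaml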
Cluster administrators can control pod network settings on nodes by modifying parameters in the networkConfig section of the node configuration file (located at /etc/origin/node/node-config.yaml by default):
networkConfig:
mtu: 1450 (1)
networkPluginName: "redhat/openshift-ovs-subnet" (2)
(1) Maximum transmission unit (MTU) for the pod overlay network
(2) Set to redhat/openshift-ovs-subnet for the ovs-subnet plug-in, redhat/openshift-ovs-multitenant for the ovs-multitenant plug-in, or redhat/openshift-ovs-networkpolicy for the ovs-networkpolicy plug-in
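To see the per-node subnets that these settings produce, you can list the hostsubnet objects (the same objects referenced by the migration commands below) as an optional check:

$ oc get hostsubnet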
If you are already using one SDN plug-in and want to switch to another, change the networkPluginName parameter in both the master and node configuration files and restart the master and node services. If you are switching from an OpenShift SDN plug-in to a third-party plug-in, also clean up the OpenShift SDN-specific objects:
$ oc delete clusternetwork --all
$ oc delete hostsubnets --all
$ oc delete netnamespaces --all
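For reference, on an RPM-based {product-title} 3.x installation the services restarted in the step above are typically the following, run on each master and each node respectively; the exact unit names are an assumption here, as they vary by release and by whether the masters run in a native HA configuration:

# systemctl restart atomic-openshift-master
# systemctl restart atomic-openshift-node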
When switching from the ovs-subnet to the ovs-multitenant OpenShift SDN plug-in, all the existing projects in the cluster will be fully isolated (assigned unique VNIDs). Cluster administrators can choose to modify the project networks using the administrator CLI.
Check VNIDs by running:
$ oc get netnamespace
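For example, project networks can then be joined, isolated, or made global with the oc adm pod-network commands; the project names below are placeholders:

$ oc adm pod-network join-projects --to=<project1> <project2>
$ oc adm pod-network isolate-projects <project1>
$ oc adm pod-network make-projects-global <project1>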
If a host that is external to {product-title} requires access to the cluster network, you have two options:
- Configure the host as an {product-title} node but mark it unschedulable so that the master does not schedule containers on it (see the example below).
- Create a tunnel between your host and a host that is on the cluster network.
Both options are presented as part of a practical use-case in the documentation for configuring routing from an edge load-balancer to containers within OpenShift SDN.
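For the first option, the node can be marked unschedulable with the administrator CLI; the node name below is a placeholder:

$ oc adm manage-node <node_name> --schedulable=false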
As an alternative to the default SDN, {product-title} also provides Ansible playbooks for installing flannel-based networking. This is useful if running {product-title} within a cloud provider platform that also relies on SDN, such as Red Hat OpenStack Platform, and you want to avoid encapsulating packets twice through both platforms.
Important: This is only supported for {product-title} on Red Hat OpenStack Platform.
Important: Neutron port security must be configured, even when security groups are not being used.
To enable flannel within your {product-title} cluster:
- Neutron port security controls must be configured to be compatible with flannel. The default configuration of Red Hat OpenStack Platform disables user control of port_security. Configure Neutron to allow users to control the port_security setting on individual ports:
  - On the Neutron servers, add the following to the /etc/neutron/plugins/ml2/ml2_conf.ini file:

    [ml2]
    ...
    extension_drivers = port_security
  - Then, restart the Neutron services:

    service neutron-dhcp-agent restart
    service neutron-ovs-cleanup restart
    service neutron-metadata-agent restart
    service neutron-l3-agent restart
    service neutron-plugin-openvswitch-agent restart
    service neutron-vpn-agent restart
    service neutron-server restart
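  - Optionally, to verify that the port security extension driver is now loaded (assuming the unified OpenStack CLI is available on the Neutron server; this is a quick check rather than a required step), list the enabled network extensions:

    $ openstack extension list --network | grep -i port-security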
- Set the following variables in your Ansible inventory file before running the installation:

  openshift_use_openshift_sdn=false (1)
  openshift_use_flannel=true (2)
  flannel_interface=eth0
  (1) Set openshift_use_openshift_sdn to false to disable the default SDN.
  (2) Set openshift_use_flannel to true to enable flannel in place.
- Optionally, you can specify the interface to use for inter-host communication using the flannel_interface variable. Without this variable, the {product-title} installation uses the default interface.
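After the installation completes, one way to confirm which interface and subnet flanneld is using on a node is to inspect the standard flannel runtime artifacts. The flanneld unit name and the subnet.env path below are assumptions based on a typical RPM-based flannel deployment and may differ by release:

$ systemctl status flanneld
$ cat /run/flannel/subnet.env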