Container Network Interface

MKE 4k supports both the Kube-router and Calico OSS Container Network Interface (CNI) plugins, which provide the networking functionality needed for container communication and management within a cluster.

⚠️
Calico OSS is the only CNI for which configuration migration is supported during an MKE 3 to MKE 4k upgrade.

Configuration example

The network section of the mke4.yaml configuration file renders as follows:

 network:
    cplb:
      disabled: true
    kubeProxy:
      iptables:
        minSyncPeriod: 0s
        syncPeriod: 0s
      ipvs:
        minSyncPeriod: 0s
        syncPeriod: 0s
        tcpFinTimeout: 0s
        tcpTimeout: 0s
        udpTimeout: 0s
      metricsBindAddress: 0.0.0.0:10249
      mode: iptables
      nftables:
        minSyncPeriod: 0s
        syncPeriod: 0s
    multus:
      enabled: false
    nllb:
      disabled: true
    nodePortRange: 32768-35535
    serviceCIDR: 10.96.0.0/16
    providers:
    - enabled: true
      extraConfig:
        cidrV4: 192.168.0.0/16
        linuxDataplane: Iptables
        loglevel: Info
      provider: calico
    - enabled: false
      provider: custom
    - enabled: false
      extraConfig:
        cidrV4: 192.168.0.0/16
        v: "5"
      provider: kuberouter

Network configuration

The following table includes details on all of the configurable network fields.

Field | Description | Values | Default
serviceCIDR | Sets the IPv4 range of IP addresses for services in a Kubernetes cluster. | Valid IPv4 CIDR | 10.96.0.0/16
providers | Sets the provider for the active CNI. | calico | calico
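
For example, to change the service CIDR and keep Calico as the active provider, you can edit the network section of mke4.yaml as follows. This is a minimal fragment; the 10.200.0.0/16 range is illustrative, and all other network fields retain their defaults:

 network:
    serviceCIDR: 10.200.0.0/16    # illustrative range; must not overlap the pod CIDR (cidrV4)
    providers:
    - enabled: true
      provider: calico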

CNI providers configuration

Kube-router

To configure the Kube-router CNI, refer to the command-line options documented at https://github.com/cloudnativelabs/kube-router/blob/master/docs/user-guide.md#command-line-options.

Two parameters, cidrV4 and v, specify the IPv4 CIDR and log level for the cluster. Any other parameters specified in the extraConfig section of the CNI are passed as-is to Kube-router as key-value pairs, with -- prepended to each key.
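
For example, assuming the Kube-router provider is enabled, adding an extraConfig entry such as metrics-port (a Kube-router command-line option) would pass the corresponding flag through to Kube-router:

    providers:
    - enabled: true
      extraConfig:
        cidrV4: 192.168.0.0/16
        v: "5"
        metrics-port: "12013"    # passed to Kube-router as --metrics-port=12013
      provider: kuberouter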

Calico OSS

The following table includes details on the configurable settings for the Calico provider.

Field | Description | Values | Default
enabled | Enables the Calico provider. | true | true
cidrV4 | Sets the IP pool in the Kubernetes cluster from which Pods are allocated. | Valid IPv4 CIDR | 192.168.0.0/16
linuxDataplane | Sets the dataplane for the Calico CNI. | Iptables | Iptables
loglevel | Sets the log level for the CNI components. | Info, Debug | Info

You can easily modify cidrV4 prior to cluster deployment. Contact Mirantis Support, however, if you need to modify cidrV4 once your cluster has been deployed.
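
For example, to raise the log level of the Calico components to Debug while keeping the default pod CIDR and dataplane, set the Calico entry in the providers list as follows:

    providers:
    - enabled: true
      extraConfig:
        cidrV4: 192.168.0.0/16
        linuxDataplane: Iptables
        loglevel: Debug    # default is Info
      provider: calico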

The default network configuration described herein offers a serviceable, low-maintenance solution. If, however, you want more control over your network environment, MKE 4k exposes the full configuration surface of the Calico CNI, through which you can configure your networking to the fullest extent the provider allows. To do so, use the values.yaml key, in which case the network section resembles the following:

 network:
    cplb:
      disabled: true
    kubeProxy:
      iptables:
        minSyncPeriod: 0s
        syncPeriod: 0s
      ipvs:
        minSyncPeriod: 0s
        syncPeriod: 0s
        tcpFinTimeout: 0s
        tcpTimeout: 0s
        udpTimeout: 0s
      metricsBindAddress: 0.0.0.0:10249
      mode: iptables
      nftables:
        minSyncPeriod: 0s
        syncPeriod: 0s
    multus:
      enabled: false
    nllb:
      disabled: true
    nodePortRange: 32768-35535
    serviceCIDR: 10.96.0.0/16
    providers:
    - enabled: true
      extraConfig:
        loglevel: Info
        values.yaml: |-
          kubeletVolumePluginPath: /var/lib/k0s/kubelet
          installation:
            logging:
              cni:
                logSeverity: Debug
            cni:
              type: Calico
            calicoNetwork:
              linuxDataplane: Iptables
              ipPools:
              - cidr: 192.168.0.0/15
                encapsulation: VXLAN
          resources:
            requests:
              cpu: 250m
          defaultFelixConfiguration:
            enabled: true
            wireguardEnabled: false
            wireguardEnabledV6: false          
      provider: calico
    - enabled: false
      provider: custom
    - enabled: false
      extraConfig:
        cidrV4: 192.168.0.0/16
        v: "5"
      provider: kuberouter
  • If you want to supply an exact YAML specification for the Helm installation of Tigera Operator, you must do so during the initial cluster installation.
  • The supplied YAML for values.yaml must include the exact first line kubeletVolumePluginPath: /var/lib/k0s/kubelet, otherwise the MKE 4k installation will fail.
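
A minimal values.yaml that satisfies these requirements might resemble the following fragment; the IP pool and VXLAN encapsulation shown are illustrative, and you should choose values appropriate to your environment:

        values.yaml: |-
          kubeletVolumePluginPath: /var/lib/k0s/kubelet    # required exact first line
          installation:
            cni:
              type: Calico
            calicoNetwork:
              linuxDataplane: Iptables
              ipPools:
              - cidr: 192.168.0.0/16
                encapsulation: VXLAN    # illustrative encapsulation choice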

ℹ️

You can view the full values.yaml specification for the Helm chart used to install Tigera Operator in the Project Calico GitHub repository.

The network configuration generated when you upgrade an existing MKE 3 cluster to MKE 4k always uses the values.yaml key. Because such clusters have at least one existing IP pool, however, the CIDR and dataplane values are specified outside of the YAML, as illustrated below:

  network:
    cplb:
      disabled: true
    kubeProxy:
      iptables:
        minSyncPeriod: 0s
        syncPeriod: 0s
      ipvs:
        minSyncPeriod: 0s
        syncPeriod: 0s
        tcpFinTimeout: 0s
        tcpTimeout: 0s
        udpTimeout: 0s
      metricsBindAddress: 0.0.0.0:10249
      mode: iptables
      nftables:
        minSyncPeriod: 0s
        syncPeriod: 0s
    multus:
      enabled: false
    nllb:
      disabled: true
    nodePortRange: 30000-32768
    serviceCIDR: 10.96.0.0/16
    providers: 
    - enabled: true
      extraConfig:
        cidrV4: 192.168.0.0/15
        linuxDataplane: Iptables
        loglevel: DEBUG
        values.yaml: |-
          kubeletVolumePluginPath: /var/lib/k0s/kubelet
          installation:
            registry: ghcr.io/mirantiscontainers/
            cni:
              type: Calico
            calicoNetwork:
              bgp: Disabled
              linuxDataplane: Iptables
          resources:
            requests:
              cpu: 250m
          tigeraOperator:
            version: v1.37.1
            registry: ghcr.io/mirantiscontainers/
          defaultFelixConfiguration:
            enabled: true
            bpfConnectTimeLoadBalancing: TCP
            bpfHostNetworkedNATWithoutCTLB: Enabled
            bpfLogLevel: Debug
            floatingIPs: Disabled
            logSeverityScreen: Debug
            logSeveritySys: Debug
            reportingInterval: 0s
            vxlanPort: 4789
            vxlanVNI: 10037          
      provider: calico
    - enabled: false
      provider: custom
ℹ️
  • MKE 4k uses a static port range for Kubernetes NodePorts: 32768 to 35535.
  • Following a successful MKE 3 to MKE 4k upgrade, MKE displays a list of the ports that no longer need to be open on manager and worker nodes. You can block these ports.

Limitations

  • MKE 4k does not support Calico Enterprise.
  • Only clusters that use the default Kubernetes proxier, iptables, can be upgraded from MKE 3 to MKE 4k.
  • Only KDD-backed MKE 3 clusters can be upgraded to MKE 4k. Refer to Upgrade from MKE 3.7 or 3.8 for more information.