- Required configuration
- Optional Configuration
Required configuration
The cluster configuration file can be generated by using the clusterctl generate cluster command.
This command uses the template file and replaces the values surrounded by ${} with environment variables. You have to set all required environment variables in advance. The following sections explain in more detail what should be configured.
Note: You can also use the template file directly by manually replacing the values.
Note: By default the command creates a highly available control plane with the internal OpenStack cloud provider. If you wish to create a highly available control plane with the external OpenStack cloud provider, or a single control plane without a load balancer, use the external-cloud-provider or without-lb flavor respectively. For example:
# Using 'external-cloud-provider' flavor
clusterctl generate cluster capi-quickstart \
  --flavor external-cloud-provider \
  --kubernetes-version v1.24.2 \
  --control-plane-machine-count=3 \
  --worker-machine-count=1 \
  > capi-quickstart.yaml

# Using 'without-lb' flavor
clusterctl generate cluster capi-quickstart \
  --flavor without-lb \
  --kubernetes-version v1.24.2 \
  --control-plane-machine-count=1 \
  --worker-machine-count=1 \
  > capi-quickstart.yaml
OpenStack version
We currently require at least OpenStack Pike.
Operating system image
We currently depend on an up-to-date version of cloud-init; otherwise the operating system choice is yours. The kubeadm bootstrap provider we’re using also depends on some pre-installed software, such as a container runtime, kubelet, and kubeadm. For an example of how to build such an image, take a look at image-builder (openstack).
The image can be referenced by exposing it as an environment variable OPENSTACK_IMAGE_NAME.
SSH key pair
The SSH key pair is required. You can create one using,
openstack keypair create [--public-key <file> | --private-key <file>] <name>
The key pair name must be exposed as an environment variable OPENSTACK_SSH_KEY_NAME.
In order to access cluster nodes via SSH, you must either access nodes through the bastion host or configure custom security groups with rules allowing ingress for port 22.
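If you go the custom security group route, a rule such as the following can be used to allow SSH ingress (a sketch; the security group name and the 0.0.0.0/0 source are placeholders you should adapt):

openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 <security-group-name>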
OpenStack credential
Generate credentials
The env.rc script sets the environment variables related to credentials. It’s highly recommended to avoid using the admin credential.
source env.rc <path/to/clouds.yaml> <cloud>
The following variables are set.
| Variable | Meaning |
|----------|---------|
| OPENSTACK_CLOUD | The cloud name which is used as the second argument |
| OPENSTACK_CLOUD_YAML_B64 | The secret used by Cluster API Provider OpenStack to access OpenStack |
| OPENSTACK_CLOUD_PROVIDER_CONF_B64 | The content of cloud.conf which is used by the OpenStack cloud provider |
| OPENSTACK_CLOUD_CACERT_B64 | The content of your custom CA file which can be specified in your clouds.yaml by ca-file; mandatory when the OpenStack endpoint is https |
Note: Only the external cloud provider supports Application Credentials.
Note: You need to set the clusterctl.cluster.x-k8s.io/move label on the secret created from OPENSTACK_CLOUD_YAML_B64 in order to successfully move objects from the bootstrap cluster to the target cluster. See bug 626 for further information.
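For example, the label can be added with kubectl. The secret name <cluster-name>-cloud-config below is only an assumption; use the name of the secret your template actually creates:

kubectl label secret <cluster-name>-cloud-config clusterctl.cluster.x-k8s.io/move=""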
Availability zone
The availability zone names must be exposed as an environment variable OPENSTACK_FAILURE_DOMAIN.
By default, if no availability zone is given, all availability zones defined in OpenStack are candidates to provision from. If an administrator credential is used, the internal availability zone (an internal-only availability zone inside nova) will also be returned, which can cause problems; see PR 1165 for further information. We therefore highly recommend setting the availability zone explicitly.
DNS server
The DNS servers must be exposed as an environment variable OPENSTACK_DNS_NAMESERVERS.
Machine flavor
The flavors for control plane and worker node machines must be exposed as environment variables OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR and OPENSTACK_NODE_MACHINE_FLAVOR respectively.
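Putting the required variables together, a typical environment might look like the following sketch (all values are placeholders; adjust them to your cloud):

export OPENSTACK_IMAGE_NAME=<image-name>
export OPENSTACK_SSH_KEY_NAME=<key-pair-name>
export OPENSTACK_FAILURE_DOMAIN=<availability-zone>
export OPENSTACK_DNS_NAMESERVERS=<dns-server-ip>
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=<flavor-name>
export OPENSTACK_NODE_MACHINE_FLAVOR=<flavor-name>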
Optional Configuration
Log level
When running CAPO with --v=6 the gophercloud client logs its requests to the OpenStack API. This can be helpful during debugging.
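The flag is set on the CAPO controller manager. A minimal sketch, assuming the default deployment name capo-controller-manager in the capo-system namespace (adjust to your installation), is to edit the deployment and add --v=6 to the manager container's args:

kubectl -n capo-system edit deployment capo-controller-manager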
External network
If there is only a single external network it will be detected automatically. If there is more than one external network you can specify which one the cluster should use by setting the environment variable OPENSTACK_EXTERNAL_NETWORK_ID.
The public network ID can be obtained by using the command:
openstack network list --external
Note: If your OpenStack cloud does not already have a public network, you should contact your cloud service provider. We will not review how to troubleshoot this here.
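Once you have picked the network, its ID can be exported before generating the cluster configuration, for example:

export OPENSTACK_EXTERNAL_NETWORK_ID=<external-network-id>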
API server floating IP
Unless explicitly disabled, a floating IP is automatically created and associated with the load balancer or controller node. If required, you can specify the floating IP explicitly by spec.apiServerFloatingIP of OpenStackCluster.
You have to be able to create a floating IP in your OpenStack in advance. You can create one using,
openstack floating ip create <public network>
Note: Only a user with the admin role can create a floating IP with a specific IP address.
Note: When associating a floating IP to a cluster with more than one controller node, the floating IP will be associated with the first controller node, and the other controller nodes will have no floating IP assigned. When that controller node's floating IP status is down, CAPO will NOT automatically assign the floating IP address to any other controller node. We therefore recommend using only one controller node when a floating IP is needed, or consider using a load balancer instead; see issue #1265 for further information.
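A minimal sketch of pinning the API server floating IP in the OpenStackCluster spec (the field value is a placeholder):

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackCluster
metadata:
  name: <cluster-name>
  namespace: <cluster-namespace>
spec:
  ...
  apiServerFloatingIP: <floating-ip-address>
  ...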
Disabling the API server floating IP
It is possible to provision a cluster without a floating IP for the API server by setting OpenStackCluster.spec.disableAPIServerFloatingIP: true (the default is false). This will prevent a floating IP from being allocated.
WARNING
If the API server does not have a floating IP, workload clusters will only deploy successfully when the management cluster and workload cluster control plane nodes are on the same network. This can be a project-specific network, if the management cluster lives in the same project as the workload cluster, or a network that is shared across multiple projects.
In particular, this means that the cluster cannot use OpenStackCluster.spec.nodeCidr to provision a new network for the cluster. Instead, use OpenStackCluster.spec.network to explicitly specify the same network as the management cluster is on.
When the API server floating IP is disabled, it is not possible to provision a cluster without a load balancer without additional configuration (an advanced use-case that is not documented here). This is because the API server must still have a virtual IP that is not associated with a particular control plane node in order to allow the nodes to change underneath, e.g. during an upgrade. When the API server has a floating IP, this role is fulfilled by the floating IP even if there is no load balancer. When the API server does not have a floating IP, the load balancer virtual IP on the cluster network is used.
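A sketch of such a configuration, assuming the management cluster is attached to a network named <management-network-name>:

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackCluster
metadata:
  name: <cluster-name>
  namespace: <cluster-namespace>
spec:
  disableAPIServerFloatingIP: true
  network:
    name: <management-network-name>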
Restrict Access to the API server
NOTE
This requires “amphora” as the load balancer provider, in version >= v2.12.
It is possible to restrict access to the Kubernetes API server on a network level. If required, you can specify the allowed CIDRs via spec.apiServerLoadBalancer.allowedCidrs of OpenStackCluster.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackCluster
metadata:
  name: <cluster-name>
  namespace: <cluster-namespace>
spec:
  allowAllInClusterTraffic: true
  apiServerLoadBalancer:
    allowedCidrs:
    - 192.168.10/24
    - 10.10.0.0/16
All known IPs of the target cluster will be discovered dynamically (e.g. you don’t have to take care of the target cluster’s own router IP, internal CIDRs or any bastion host IP). Note: Please ensure that at least the outgoing IP of your management cluster is added to the list of allowed CIDRs. Otherwise CAPO can’t reconcile the target cluster correctly.
All applied CIDRs (user defined + dynamically discovered) are written back into status.network.apiServerLoadBalancer.allowedCIDRs:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackCluster
metadata:
  name: <cluster-name>
  namespace: <cluster-namespace>
status:
  network:
    apiServerLoadBalancer:
      allowedCIDRs:
      - 10.6.0.0/24       # openStackCluster.Status.Network.Subnet.CIDR
      - 10.6.0.90/32      # bastion host internal IP
      - 10.10.0.0/16      # user defined
      - 192.168.10/24     # user defined
      - 172.16.111.100/32 # bastion host floating IP
      - 172.16.111.85/32  # router IP
      internalIP: 10.6.0.144
      ip: 172.16.111.159
      name: k8s-clusterapi-cluster-<cluster-namespace>-<cluster-name>
If you have locked yourself or the CAPO management cluster out, you can easily clear the allowed_cidrs field on OpenStack via:
openstack loadbalancer listener unset --allowed-cidrs <listener ID>
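If you need to look up the listener ID first, it can be listed with the Octavia CLI:

openstack loadbalancer listener list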
Network Filters
If you have a complex query that you want to use to look up a network, you can do this by using a network filter. More details about the filter can be found in NetworkParam.
When using filters to look up a network, please note that it is possible to get multiple networks as a result. This should not be a problem; however, please test your filters with openstack network list to be certain that it returns the networks you want. Please refer to the following usage example:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  networks:
  - filter:
      name: <network-name>
Multiple Networks
You can specify multiple networks (or subnets) to connect your server to. To do this, simply add another entry in the networks array. The following example connects the server to 3 different networks:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  networks:
  - filter:
      name: myNetwork
      tags: myTag
  - uuid: your_network_id
  - subnet_id: your_subnet_id
Subnet Filters
Rather than just using a network, you have the option of specifying a specific subnet to connect your server to. The following is an example of how to specify a specific subnet of a network to use for your server.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  networks:
  - filter:
      name: <network-name>
    subnets:
    - filter:
        name: <subnet-name>
Ports
A server can also be connected to networks by describing what ports to create. Describing a server’s connection with ports allows for finer and more advanced configuration. For example, you can specify per-port security groups, fixed IPs, VNIC type or profile.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  ports:
  - network:
      id: <your-network-id>
    nameSuffix: <your-port-name>
    description: <your-custom-port-description>
    vnicType: normal
    fixedIPs:
    - subnet:
        id: <your-subnet-id>
      ipAddress: <your-fixed-ip>
    - subnet:
        name: <your-subnet-name>
        tags:
        - tag1
        - tag2
    securityGroups:
    - <your-security-group-id>
    profile:
      capabilities:
      - <capability>
Any such ports are created in addition to ports used for connections to networks or subnets.
Also, port security can be applied to a specific port to enable/disable port security on that port. When not set, it takes the value of the corresponding field at the network level.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  ports:
  - networkId: <your-network-id>
    ...
    disablePortSecurity: true
    ...
Security groups
Security groups are used to determine which ports of the cluster nodes are accessible from where.
If spec.managedSecurityGroups of OpenStackCluster is set to true, two security groups named k8s-cluster-${NAMESPACE}-${CLUSTER_NAME}-secgroup-controlplane and k8s-cluster-${NAMESPACE}-${CLUSTER_NAME}-secgroup-worker will be created and added to the control plane and worker nodes respectively.
By default, these groups have rules that allow the following traffic:
- Control plane nodes
- API server traffic from anywhere
- Etcd traffic from other control plane nodes
- Kubelet traffic from other cluster nodes
- Calico CNI traffic from other cluster nodes
- Worker nodes
- Node port traffic from anywhere
- Kubelet traffic from other cluster nodes
- Calico CNI traffic from other cluster nodes
To use a CNI other than Calico, the flag OpenStackCluster.spec.allowAllInClusterTraffic can be set to true. With this flag set, the rules for the managed security groups permit all traffic between cluster nodes on all ports and protocols (API server and node port traffic is still permitted from anywhere, as with the default rules).
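A sketch of an OpenStackCluster spec that enables the managed security groups together with this flag:

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackCluster
metadata:
  name: <cluster-name>
  namespace: <cluster-namespace>
spec:
  managedSecurityGroups: true
  allowAllInClusterTraffic: true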
If this is not flexible enough, pre-existing security groups can be added to the spec of an OpenStackMachineTemplate, e.g.:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: ${CLUSTER_NAME}-control-plane
spec:
  template:
    spec:
      securityGroups:
      - name: allow-ssh
Tagging
You have the ability to tag all resources created by the cluster in the OpenStackCluster spec. Here is an example of how to configure tagging:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackCluster
metadata:
  name: <cluster-name>
  namespace: <cluster-name>
spec:
  tags:
  - cluster-tag
To tag resources specific to a machine, add a value to the tags field in the OpenStackMachineTemplate spec like this:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  tags:
  - machine-tag
Metadata
You also have the option to add metadata to instances. Here is a usage example:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  serverMetadata:
    name: bob
    nickname: bobbert
Boot From Volume
In OpenStackMachineTemplate, setting spec.rootVolume.diskSize to a value greater than 0 means the machine will boot from a volume. For example:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  ...
  rootVolume:
    diskSize: <image size>
    volumeType: <a cinder volume type (*optional)>
    availabilityZone: <the cinder availability zone for the root volume (*optional)>
  ...
If volumeType is not specified, cinder will use the default volume type.
If availabilityZone is not specified, the volume will be created in the cinder availability zone specified in the MachineSpec’s failureDomain. This same value is also used as the nova availability zone when creating the server. Note that this will fail if cinder and nova do not have matching availability zones. In this case, cinder availabilityZone must be specified explicitly on rootVolume.
Timeout settings
The default timeout for instance creation is 5 minutes. If creating servers in your OpenStack takes a long time, you can increase the timeout. You can set a new value, in minutes, via the environment variable CLUSTER_API_OPENSTACK_INSTANCE_CREATE_TIMEOUT in your Cluster API Provider OpenStack controller deployment.
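A minimal sketch of setting the variable, assuming the default deployment name capo-controller-manager in the capo-system namespace (adjust to your installation):

kubectl -n capo-system set env deployment/capo-controller-manager CLUSTER_API_OPENSTACK_INSTANCE_CREATE_TIMEOUT=10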
Custom pod network CIDR
If 192.168.0.0/16 is already in use within your network, you must select a different pod network CIDR. You have to replace the CIDR 192.168.0.0/16 with your own in the generated file.
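For example, a simple substitution in the generated file could look like this (10.244.0.0/16 is just an example replacement CIDR):

sed -i 's|192.168.0.0/16|10.244.0.0/16|g' capi-quickstart.yaml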
Accessing nodes through the bastion host via SSH
Enabling the bastion host
To configure the Cluster API Provider for OpenStack to create an SSH bastion host, add the following to the OpenStackCluster spec after clusterctl generate cluster was successfully executed:
spec:
  ...
  bastion:
    enabled: true
    instance:
      flavor: <Flavor name>
      image: <Image name>
      sshKeyName: <Key pair name>
All parameters are mutable during the runtime of the bastion host.
The bastion host will be re-created if it’s enabled and the instance spec has been changed.
This is done by a simple checksum validation of the instance spec, which is stored in the OpenStackCluster annotation infrastructure.cluster.x-k8s.io/bastion-hash.
A floating IP is created and associated to the bastion host automatically, but you can add the IP address explicitly:
spec:
  ...
  bastion:
    ...
    floatingIP: <Floating IP address>
If managedSecurityGroups: true, a security group rule opening 22/tcp is added to the security groups for the bastion, controller, and worker nodes respectively. Otherwise, you have to add securityGroups to the bastion in the OpenStackCluster spec and to the OpenStackMachineTemplate spec template respectively.
Obtain floating IP address of the bastion node
Once the workload cluster is up and running after being configured for an SSH bastion host, you can use the kubectl get openstackcluster command to look up the floating IP address of the bastion host (make sure the kubectl context is set to the management cluster). The output will look something like this:
$ kubectl get openstackcluster
NAME CLUSTER READY NETWORK SUBNET BASTION
nonha nonha true 2e2a2fad-28c0-4159-8898-c0a2241a86a7 53cb77ab-86a6-4f2c-8d87-24f8411f15de 10.0.0.213