Bootstrap a Kubernetes 1.20 cluster with Ansible
This guide assumes you already have a provisioned Proxmox host.
You will also need Ansible and Ansible Galaxy installed.
Leading on from one of my previous posts, where I went over how to use a Hetzner server to bootstrap a Kubernetes cluster, I have in the meantime moved away and migrated all of my servers to my shiny new homelab.
I still make exclusive use of Proxmox, but I also have a fairly beefy TrueNAS Scale NAS which I use to host all of my internal infrastructure's storage.
Anyway back to provisioning a Kubernetes cluster with Ansible.
About two years ago I decided that I needed an automated way to provision Kubernetes clusters, removing the need for a pile of staggered scripts.
Anyway onto the implementation.
Step one - Configure Proxmox storage
Before you can provision anything you need to ensure that you have a provisioned storage target to store your shiny new VMs against.
You can just use your base volume, and for most folks this will be totally fine.
For my setup I mount an iSCSI volume which is served from a TrueNAS ZFS volume.
Alternatively if you have a spare disk available you can just create a new LVM volume and use that.
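If you go the spare-disk route, the rough shape of the commands looks like this. These only make sense on the Proxmox node itself, and the disk path (`/dev/sdb`) and storage name (`vmdata`) are assumptions — substitute your own:

```shell
# Turn the spare disk into an LVM volume group (assumed disk: /dev/sdb)
pvcreate /dev/sdb
vgcreate vmdata /dev/sdb

# Register the volume group as a Proxmox storage target for VM disk images
pvesm add lvm vmdata --vgname vmdata --content images
```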
Step two - Configure Proxmox networking
Once you have your storage configured, you will need to make sure your networking is configured.
Proxmox requires the use of virtual bridges for inter VM communication.
By default Proxmox will always create a virtual bridge called "vmbr0"; this will inherit the networking configuration that you provided when you set up your Proxmox node.
Creating a virtual bridge is pretty simple.
- Login to the Proxmox web interface
- Navigate to your node's "Network" panel.
- Click on "Create" and select "Linux Bridge"
- Fill out the network config you wish to use (note: this cannot match your default vmbr0 bridge)
- Press "Create" and BAM, you have a new bridge
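For reference, the steps above boil down to a stanza like the following in `/etc/network/interfaces` on the Proxmox node. The bridge name (`vmbr1`) and address are assumptions — use whatever matches the subnet you chose:

```
auto vmbr1
iface vmbr1 inet static
        address 10.150.10.2/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
```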
I would recommend that you create an entirely new subnet.
My core network relies on a pfSense firewall and my network backbone utilises UniFi. I will write a follow-up post on that setup, but for now I will assume you have set this up yourself.
Step three - Bootstrap the cluster
This step is surprisingly simple.
The repo already contains my base inventory file, which is probably not that useful for you, so you can delete it and create a new one.
You can simply copy the configuration below and create your own `inventory.ini` file.
```ini
[k8_prod_masters]
master1.k8 node_name=master1.k8 node_ip=10 id=40310 ip=10.150.10.10

[k8_prod_masters:vars]
gateway=10.150.10.1
dns=10.150.10.1
vlan=900
subnet=/24
metallb_cidr=10.150.10.128/25
nginx_load_balancer_ip=10.150.10.100
ansible_ssh_user='ubuntu'
ansible_connection='ssh'
ansible_python_interpreter=/usr/bin/python3
ansible_ssh_common_args='-o ServerAliveInterval=5 -o StrictHostKeyChecking=no'
devnotnull_redis=10.150.10.141
devnotnull_ui=10.150.10.134
devnotnull_api=10.150.10.143

[k8_prod_minions]
minion1.k8 node_name=minion1.k8 node_ip=20 id=40320 ip=10.150.10.20
minion2.k8 node_name=minion2.k8 node_ip=21 id=40321 ip=10.150.10.21
minion3.k8 node_name=minion3.k8 node_ip=22 id=40322 ip=10.150.10.22
minion4.k8 node_name=minion4.k8 node_ip=23 id=40323 ip=10.150.10.23
minion5.k8 node_name=minion5.k8 node_ip=24 id=40324 ip=10.150.10.24

[k8_prod_minions:vars]
gateway=10.150.10.1
dns=10.150.10.1
vlan=900
subnet=/24
ansible_ssh_user='ubuntu'
ansible_connection='ssh'
ansible_python_interpreter=/usr/bin/python3
ansible_ssh_common_args='-o ServerAliveInterval=5 -o StrictHostKeyChecking=no'
```
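With the inventory in place, a sketch of the bootstrap run. The playbook filename here is an assumption — check the repo for its actual entry point:

```shell
# Sanity-check that Ansible parses the inventory the way you expect
ansible-inventory -i inventory.ini --list

# Run the playbook against it (site.yml is an assumed name)
ansible-playbook -i inventory.ini site.yml
```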