Building a Kubernetes cluster using cheap second hand workstations.


JavaScript & functional programming at scale.

Functional programming is the current hot topic within the JavaScript ecosphere. But my question is: why?

Having come from a more traditional Java background, I have been exposed to functional ideologies via both Scala and now the new JavaScript functional craze.


How to create a simple AWS Lambda function with TypeScript.

Over the course of this tutorial you will deploy a set of simple todo Lambda functions, consisting of three separate artefacts:

  • Create todo endpoint
  • Get all todos endpoint
  • Get todo by id endpoint

What will we be working with?

`AWS Lambda` is a service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second.

`AWS DynamoDB` is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed NoSQL database with built-in security, backup and restore, and in-memory caching for internet-scale applications.
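
To give you an idea of the shape of the code you will be deploying, below is a minimal sketch of what the create handler might look like in TypeScript. This is an illustration rather than the repo's actual implementation, and it assumes the table name is injected into the function as a `TABLE_NAME` environment variable via `serverless.yaml`.

import { APIGatewayProxyHandler } from 'aws-lambda';
import { DynamoDB } from 'aws-sdk';
import { v4 as uuid } from 'uuid';

const db = new DynamoDB.DocumentClient();

// Illustrative create handler: parses the request body, persists a new
// todo item and echoes the created record back to the caller.
export const create: APIGatewayProxyHandler = async (event) => {
  const { text } = JSON.parse(event.body ?? '{}');
  const item = { id: uuid(), text, createdAt: new Date().toISOString() };

  // TABLE_NAME is an assumption: inject it via the serverless.yaml environment block.
  await db.put({ TableName: process.env.TABLE_NAME!, Item: item }).promise();

  return { statusCode: 200, body: JSON.stringify(item) };
};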

Step one - Install the Serverless framework

Install the Serverless Framework via npm, which was installed alongside Node.js.

Open up a terminal and run the following to install Serverless:

npm install -g serverless

Once the installation process is done you can verify that Serverless is installed successfully by running the following command in your terminal:

serverless

To see which version of serverless you have installed run:


serverless --version

Step two - Create a set of AWS access credentials

Now that you have Serverless installed you will need to create a new set of access credentials. These credentials will be used to push your local project into your AWS account.

First, visit [https://console.aws.amazon.com/](https://console.aws.amazon.com/)

Search for IAM (Identity & Access Management)

Click on the 'Users' link

Click on your own user

Click on the 'Security credentials' tab and then on 'Create access key'

You should now make a copy of both your access key id and secret access key.

You now have a valid set of credentials you can use to make requests against your AWS account.

Step three - Configure your new AWS account profile

Now that you have your access key and secret you need to add the credentials to your local machine as a new profile.

You can simply create a new AWS profile via the command line. For the duration of this tutorial we will be using a profile named 'tutorial'.

aws configure --profile tutorial

You will then be prompted to enter your access key id and secret.

Also set your AWS region to `eu-west-1`:

└─[$] <> aws configure --profile tutorial
AWS Access Key ID [None]: ****
AWS Secret Access Key [None]: ****
Default region name [None]: eu-west-1
Default output format [None]:
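
If you want to sanity check the new profile before deploying anything, the AWS CLI can tell you who you are authenticated as:

aws sts get-caller-identity --profile tutorial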

CONGRATS, you can now deploy to AWS.

Step four - Build and run the sample project locally

Build and start the Lambda locally:

cd packages/ecom-lamda-typescript/
yarn install 
yarn build 
sls offline start

This will start the Lambda locally. Three endpoints will be started as part of this application:

POST /todo
GET /todo
GET /todo/{id}

This repo contains a bundled Postman collection. You can download this export and import it into your local instance of Postman.


You can create a new todo item with Postman:

POST
URL
http://localhost:3001/todo

BODY
{
	"text" : "This is the todo text"
}

You can request all todo entries:

GET
URL
http://localhost:3001/todo

You can request a single todo entry:

GET
URL
http://localhost:3001/todo/{ID}

Step five - Deploying the Serverless application.

Now that you have successfully run the Lambda function locally you should be able to deploy it to the AWS account you configured in step 3.

We will start by rebuilding the application to ensure that the deployed artifact contains all of the latest updates.

yarn build

Once you have built the latest version of the application you can deploy it via the Serverless CLI.

You will need to ensure that you use the `tutorial` profile that you created previously.


export AWS_PROFILE="tutorial" && sls deploy

If all goes well then the Serverless framework will provide you with a list of all of the deployed endpoints and some additional metadata.

└─[$] <git:(master)> sls deploy --profile tutorial
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Service files not changed. Skipping deployment...
Service Information
service: sls-typescript-todo-api-with-dynamodb
stage: dev
region: eu-west-1
stack: sls-typescript-todo-api-with-dynamodb-dev
resources: 25
api keys:
  None
endpoints:
  POST - https://2vwe9db8eg.execute-api.eu-west-1.amazonaws.com/dev/todo
  GET - https://2vwe9db8eg.execute-api.eu-west-1.amazonaws.com/dev/todo
  GET - https://2vwe9db8eg.execute-api.eu-west-1.amazonaws.com/dev/todo/{id}
functions:
  create: sls-typescript-todo-api-with-dynamodb-dev-create
  getAll: sls-typescript-todo-api-with-dynamodb-dev-getAll
  getById: sls-typescript-todo-api-with-dynamodb-dev-getById
layers:
  None
Serverless: Run the "serverless" command to setup monitoring, troubleshooting and testing.

Step six - The AWS Lambda dashboard.

Now that your Serverless functions have been deployed, you probably want to start interacting with them.

Conveniently AWS provides a dashboard for all of their services.

We will start by once again going back to [https://console.aws.amazon.com/](https://console.aws.amazon.com/).

Search for Lambda

You will now be presented with all of the functions that were deployed as part of the Serverless deployment process. For now, navigate to the `sls-typescript-todo-api-with-dynamodb-dev-create` function.

This panel contains all of the information relevant to this particular function. When a Lambda function is deployed it is assigned a random DNS name that anyone can access to trigger the Lambda function. All traffic in and out of Lambda functions is proxied through `API Gateway`.

As such, click on the `API Gateway` asset linked within the `Designer` panel.

Below the primary designer panel you will now see the `API Gateway` metadata, including a public link for the Lambda function you have just deployed.

Much like when you ran the Lambda locally, you can now POST to this endpoint to create a new todo.

Please note your DNS name will be different.

POST
URL
https://2vwe9db8eg.execute-api.eu-west-1.amazonaws.com/dev/todo

BODY
{
	"text" : "This is the todo text"
}
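
If you prefer the command line over Postman, the same request can be made with curl (remember to substitute your own generated domain):

curl -X POST -H "Content-Type: application/json" -d '{"text": "This is the todo text"}' https://2vwe9db8eg.execute-api.eu-west-1.amazonaws.com/dev/todo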

Step seven - But wait, where are we storing all of the data?

You may be wondering: where the hell are we saving and fetching the todo entries?

And the answer to that is we are storing all of the records in `DynamoDB`. The Serverless framework allows you to declare and create `DynamoDB` tables when deploying your Serverless functions.

We have declared a basic `DynamoDB` table within a separate CloudFormation file that is imported by the core `serverless.yaml` file and invoked when deploying the declared application.

The declaration and configuration of a `DynamoDB` table is beyond the scope of this tutorial, but you can view the configuration in the below file.

./packages/ecom-lamda-typescript/serverless-paritals/dynamodb-todo.yaml
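
For reference, reading from that table inside a handler is just as small. Below is a minimal sketch of a getAll-style handler, again assuming the table name is exposed to the function as `TABLE_NAME`:

import { DynamoDB } from 'aws-sdk';

const db = new DynamoDB.DocumentClient();

// Illustrative getAll handler: scans the todo table declared in the
// CloudFormation partial and returns every item.
export const getAll = async () => {
  const result = await db.scan({ TableName: process.env.TABLE_NAME! }).promise();
  return { statusCode: 200, body: JSON.stringify(result.Items) };
};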

Much like with the deployed Lambda functions, you can view, edit and update `DynamoDB` tables using the AWS console.

We will start by once again going back to [https://console.aws.amazon.com/](https://console.aws.amazon.com/).

Search for DynamoDB

You should now see a list of all of the DynamoDB tables that are available to you. Click on our tutorial table.

Now you can manage the deployed table. You can query and update all of the records within the database from the `DynamoDB` console.
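
You can also inspect the table from the command line with the AWS CLI; {TABLE_NAME} below is a placeholder for whatever name the CloudFormation partial declares:

aws dynamodb scan --table-name {TABLE_NAME} --profile tutorial --region eu-west-1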


Microservices are not always the solution

Before we start, what is the official definition of 'microservices'?

Microservices - also known as the microservices architecture - is an architectural style that structures an application as a collection of services that are:

  • Independently deployable
  • Loosely coupled
  • Organized around business capabilities
  • Owned by a small team

The microservices architecture enables an organisation to deliver large, complex applications rapidly, frequently, reliably and sustainably - a necessity for competing and winning in today’s world.

As developers we always want to build the best solution possible, and to that end we often take inspiration from others and stand on the shoulders of giants.

The concept of microservices is nothing new, and it has become a hugely popular architectural pattern for most companies (2023), large and small alike. However, throughout my tenure as a developer I've often wondered why the design choice is so prevalent across all organisations.

If you're a large multi-national organisation with 20+ engineering teams spread across multiple functions then the idea of decoupled and isolated services is a sound one.

But if you're a smaller organisation with a smaller set of multi-functional teams, I question the value of having potentially hundreds of microservices.

From a design perspective the idea seems very tempting, and rightly so. Teams can iterate independently and have small releases that can be tested in isolation.

But when you look at this longer term the benefits start to become less apparent. When you have many more services than teams, most of the services start to become unloved and orphaned, leading to operational problems.

You start to lose the domain knowledge around a service and its function, which is often far more important than the actual code within the service. Reading code is easy, but understanding its context often takes a lot more time, especially if that service is being called by multiple other services that you're not aware of.

Now this isn't to say that microservices are bad. Quite the opposite: they are a fundamental design pattern that allows teams to move quickly at scale.

But I think for smaller organisations they are often not the best choice.

Monoliths have become a thing of hatred, and in some situations this is more than valid. However, they would often serve an organisation better than microservices.

But when I refer to monoliths I'm actually referring more to domain based services where related entities with a related context are grouped together in a singular service.

A lot of the design decisions that I've seen taken over the last 5 years have been geared towards the shiny new technologies and design patterns offered by cloud providers, such as AWS Lambda and GCP Cloud Functions. I love these services just as much as the next engineer, but I feel like teams have started to wedge them into every application under the sun just so they can get exposure to the services.

Anyway, TLDR: if you're a large organisation with loads of teams then microservices are the way to go, but if you're a smaller organisation then you're likely not going to benefit from most of the advantages offered by microservices, and you should pick a more domain based architecture that matches your organisational structure.


Manually installing Kubernetes 1.10 on Proxmox.

This guide assumes that you already have an operational Proxmox instance.

Step One - Creating the VMs

Now that you have your host networking set up you're ready to create your virtual machines. For this setup we will be creating a cluster of 3.

The layout will be,

node-01 10.20.30.101
node-02 10.20.30.102
node-03 10.20.30.103

We will be using Ubuntu 16.04 for each node. To start, SSH into your main Proxmox box and download the Ubuntu ISO into the Proxmox template directory. This is so Proxmox can see the ISO to mount it.

cd /var/lib/vz/template/iso
wget http://releases.ubuntu.com/16.04.1/ubuntu-16.04.1-server-amd64.iso?_ga=1.150784255.852713614.1480375703
mv ubuntu-16.04.1-server-amd64.iso?_ga=1.150784255.852713614.1480375703 ubuntu-16.04.1-server-amd64.iso

You will now be able to select this ISO when you create your VMs.

Now login to your Proxmox web-ui https://{MAIN_IP}:8006/ with your system credentials

  • Input the hostname, e.g. node-01
  • For OS select Linux 4.X/3.X/2.6
  • For CD/DVD select your downloaded ISO from the ISO menu
  • For Hard Disk I recommend 200GB per VM, space permitting
  • For CPU use 1 core, CPU spec permitting
  • For Memory use 4GB, again memory permitting
  • For Networking select NAT mode
  • Then confirm

Now you will also need to add one more network adapter. This adapter will utilize the bridge we created in the previous section.

  • Select the new VM from the "Server View"
  • Find the Hardware option
  • Select "Add" and select "Network Device"
  • We need a "Bridged mode" interface, select bridge vmbr0
  • Change the "Model" to Intel E1000. There are issues with the standard virtualised network drivers
  • Now you can turn on your VM

Install Ubuntu as you normally would, but be sure to use the NAT adapter when you install. We will configure the bridged adapter later.

You will need to repeat this step for each of the three VMs, or you can create a template from the first VM you created and clone it.

Step Two - Configure VM Network

Once Ubuntu is installed you will need to set up the networking for each VM. Connect to node-01 with VNC from the web-ui and login.

Next you will need to configure the adapters. Open up /etc/network/interfaces:

vim /etc/network/interfaces

auto ens18
iface ens18 inet dhcp

Your NAT adapter should have already been configured for you. We will now add the bridged adapter. Add the below to the config:

vim /etc/network/interfaces

auto ens19
iface ens19 inet static
        address 10.20.30.101
        netmask 255.255.255.0
        gateway 10.20.30.1

Now restart the networking service

sudo service networking restart

Try to ping the Hetzner host

ping 10.20.30.1

If you are able to ping the host then it worked! Your virtual machine is connected to the main host via the network bridge with its own adapter.

You can confirm this by connecting to the new VM with an SSH tunnel through the Hetzner host

ADDRESS = 10.20.30.101 OR 10.20.30.102 OR 10.20.30.103
ssh -A -t root@{MAIN_IP} ssh -A -t {VM_USER}@{ADDRESS}

You will need to first input the Hetzner host's password and then your new VM's credentials. If everything was set up properly then you should be able to SSH into your new VM.

This model uses the NAT adapter to connect to the internet, but you could remove the NAT adapter and just use the private network with 10.20.30.1 as the network gateway.

You will need to repeat this for each of the VMs, assigning each VM a different IP.

node-01 10.20.30.101
node-02 10.20.30.102
node-03 10.20.30.103

You now have a 3 node VM cluster connected via a private network that you can ssh into.

Step Three - Install Kubernetes

Now you will need to install Kubernetes on each of your nodes, so repeat this process for each VM.

Add the Kubernetes repo to your sources list

vim /etc/apt/sources.list.d/kubernetes.list

deb http://apt.kubernetes.io/ kubernetes-xenial main

Add the public key of the "Google Cloud Packages Automatic Signing Key"

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3746C208A7317B0F

Load in the new repo list

apt-get update

Install base packages

apt-get install -y docker.io kubelet kubeadm kubectl kubernetes-cni

Once these packages are installed we can create our base cluster.

We will use node-01 as the master

ssh -A -t root@{MAIN_IP} ssh -A -t {VM_USER}@10.20.30.101

On the master we can use the installed kubeadm tool

kubeadm init

This will take several minutes to configure. Once the process finishes you will be given a set of details that you MUST SAVE.

{VM_USER}@node-01:~# kubeadm init
<master/tokens> generated token: "7fa96f.ddb39492a1874689"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
<master/apiclient> all control plane components are healthy after 23.098433 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 10.034029 seconds
<master/discovery> created essential addon: kube-discovery, waiting for it to become ready
<master/discovery> kube-discovery is ready after 30.44947 seconds
<master/addons> created essential addon: kube-proxy
<master/addons> created essential addon: kube-dns

Kubernetes master initialised successfully!

You can now join any number of machines by running the following on each node:

kubeadm join --token 7fa96f.ddb39492a1874689 10.20.30.1

The most important command is the kubeadm join command. You need to keep this secret: if someone gets the token and your master IP they will be able to automatically add nodes to your cluster.

Now install the pod network

kubectl apply -f https://git.io/weave-kube

Because we have a small number of nodes we also want to use our master server as a minion

kubectl taint nodes --all dedicated-

Now you are ready to connect your minion nodes to the master. Assuming you have installed the base packages on node-02 and node-03, simply run:

kubeadm join --token 7fa96f.ddb39492a1874689 10.20.30.1

This will configure each node.

To check that the nodes have all checked in, run:

kubectl get nodes

NAME      STATUS    AGE
node-01   Ready     7h
node-02   Ready     7h
node-03   Ready     5h

You should see an output like the one above. Congrats, you have a Kubernetes cluster.
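
You can also confirm that the system pods (weave, kube-dns, kube-proxy) all came up cleanly:

kubectl get pods --all-namespaces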

Step Four - Setup kubectl on your local machine

Finally, configure a context on your local machine so that kubectl can talk to your new cluster:

kubectl config set-credentials ubuntu --username={KUBE_USER} --password={KUBE_PASSWORD}
kubectl config set-cluster personal --server=http://{MAIN_IP}:8080
kubectl config set-context personal-context --cluster=personal --user=ubuntu
kubectl config use-context personal-context
kubectl config set contexts.personal-context.namespace the-right-prefix
kubectl config view
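
With the context in place you can verify everything is wired up by listing the nodes from your local machine:

kubectl get nodes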


Let's Encrypt certificates and Kubernetes

I've recently started to migrate my home network away from pfSense and over to a shiny new Ubiquiti Dream Machine Pro. I can hear the screams of disgust from some of the networking folk already.

Over the past few years I have been running pfSense at the core of my home network. It's served me extremely well and I've learnt a hell of a lot along the way.

But I'll admit that whilst I loved the feature set provided, the pure power and occasional complexity of the features was a lot of overhead, and simple updates were often more hassle than I had time for as a new parent. As such, I decided it was time to bite the bullet and move to something a bit easier to manage. I already use a number of Ubiquiti switches and access points at home, so the decision to move over to an entirely Ubiquiti based setup was a pretty easy one.

I'll openly admit I thought the feature gap between pfSense and the Ubiquiti Dream Machine Pro could be easily mitigated. Some things were easy to migrate, other features I decided I could live without. But some features I really missed.

One of the aforementioned nifty features provided by pfSense was its built-in HAProxy plugin, which I previously used to hook up the external pod IPs provisioned from load-balanced Kubernetes services; it even included automated ACME certificate provisioning.

Alas, now that I have moved over to a Ubiquiti Dream Machine Pro these nifty features are no longer available.

Introducing [cert-manager](https://cert-manager.io/docs/configuration/acme/)

cert-manager is a native Kubernetes certificate management controller. It can help with issuing certificates from a variety of sources, such as Let's Encrypt, HashiCorp Vault, Venafi, a simple signing key pair, or self-signed certificates.

cert-manager allows you to add predefined metadata to a Kubernetes ingress in conjunction with a secondary proxy deployed on your cluster; in this case the proxy of choice is Traefik.

I will only cover the generation of a new certificate in this article, and not the configuration of Traefik, but I may cover that in a follow-up article when I get some more time.

I generally use Let's Encrypt for all of my certificates. I would love to use AWS Private Certificate Authority, but that's a pretty hefty chunk of change and it's hard to beat free.

I have attached a script which can be used to add a new A record (AAAA records still need to be added) to a pre-existing AWS R53 hosted zone (with the assumption this is also configured correctly). This is required given we need a chain of trust in order for ACME to provision the certificate.

When cert-manager provisions a new certificate it needs to ensure you are the legitimate owner of the domain. As such, you can validate your ownership of the domain via either an HTTP GET endpoint hosted on the domain you wish to provision the certificate for, or via a new DNS TXT entry.

We are going to use HTTP validation as it's simpler, and the process will automatically clean up any junk you don't need after the certificate has been signed.

The below script will simply (as mentioned above) create a new R53 A record for you, then create a new issuer resource which will be used to request a new certificate, and finally create a new certificate using your provided DNS name.

With "lets encrypt" you can also test your changes via their dedicated test infrastructure as requests against the production endpoints are rate limited and repeated attempts against the same domain will result in an 36 hour lockout.

#!/bin/sh
# This script will create a certificate for a subdomain
# Usage: ./certificate-domain.sh <staging|production> <ip> <domain> [subdomain]
# ./certificate-domain.sh staging 1.1.1.1 superdomain.com www
# ./certificate-domain.sh production 1.1.1.1 superdomain.com
staging="https://acme-staging-v02.api.letsencrypt.org/directory"
production="https://acme-v02.api.letsencrypt.org/directory"
echo $staging
echo $production
if [ $1 == "staging" ]
then
  domain=$staging
else
  domain=$production
fi
if [ -z "$4" ]
then 
  fulldomain=$3
  fulldomainString=$3
else
  fulldomain=$4.$3
  fulldomainString=$4-$3
fi
echo "full domain is $fulldomain"
echo "We will be creating the following certificate $domain"
echo "Curate R53 config for $fulldomain"
recordSet=$(cat <<EOF
{
  "Comment": "A new record set for the zone.",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "$fulldomain.",
        "Type": "A",
        "TTL": 600,
        "ResourceRecords": [
          {
            "Value": "$2"
          }
        ]
      }
    }
  ]
}
EOF
)
echo "$recordSet" > r53-staging-request.json
echo "Generate a certificate for $fulldomain"
aws route53 change-resource-record-sets --hosted-zone-id Z03501612HD7E79ETXM8B --change-batch file://r53-staging-request.json
echo "Generate certificate issuer for $fulldomain"
cat <<EOF | kubectl create -f -
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-$fulldomainString-$1-issuer
  namespace: default
spec:
  acme:
    # The ACME server URL
    server: $domain
    # Email address used for ACME registration
    email: domain@name.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-$1-issuer
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: traefik
EOF
echo "Generate certificate for $fulldomain"
cat <<EOF | kubectl create -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: $fulldomainString-$1
  namespace: default
spec:
  secretName: $fulldomainString-$1-certificate
  issuerRef:
    name: letsencrypt-$fulldomainString-$1-issuer
  commonName: $fulldomain
  dnsNames:
  - $fulldomain
EOF
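
Once the script has run you can keep an eye on the challenge and issuance process from the cluster; {CERTIFICATE_NAME} below is a placeholder following the $fulldomainString-$1 naming convention used in the script:

kubectl get certificate
kubectl describe certificate {CERTIFICATE_NAME}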

I have also spun up a new GitHub repo with the full script.


Installing Debian and Proxmox on a Hetzner server.

For this example I shall be using a dedicated server from Hetzner https://www.hetzner.de/en/. A shout out to Hetzner: if you're looking for cheap and beefy dedicated hosting then these guys are your best bet.

Setting up the Hetzner server

This guide assumes your server has Debian 8 (Jessie) installed.

Config when tested:

Intel Core i7-920

2x 2.0 TB SATA Enterprise HDD

6x 8192 MB DDR3 RAM = 48 GB

Step one - Install Proxmox

You will begin by creating a new apt source for Proxmox

vim /etc/apt/sources.list.d/proxmox.list
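
The repository entry itself isn't shown above; for Debian 8 (Jessie) it should be the pve-no-subscription repository, something like:

deb http://download.proxmox.com/debian jessie pve-no-subscription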

Once you have added the new apt source you will add the repo key:

wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -

You will now need to update the system repos

apt-get update && apt-get dist-upgrade

Now you will need to install the Proxmox VE kernel

apt-get install pve-firmware pve-kernel-4.4.8-1-pve pve-headers-4.4.8-1-pve

Now reboot the machine to load in the new kernel.

Once the machine is back in an up state you can install the main Proxmox application.

apt-get install proxmox-ve

Once again reboot your machine. When the machine is once again in an up state you will be able to access the web-ui for Proxmox at https://{YOUR_IP}:8006/. You will be able to login with your root credentials.

VOILA, you have Proxmox installed and running. Now you will need to configure all the networking.

Step two - Configure the internal network.

Your eth0 interface should have already been pre-configured

auto  eth0
iface eth0 inet static
  address   PUBLIC_IP
  netmask   255.255.255.192
  gateway   GATEWAY_IP
  # default route to access subnet
  up route add -net NET_IP netmask 255.255.255.192 gw GATEWAY_IP eth0

Now we will need to create an internal network for virtual machines to connect and communicate with. This will be the backbone of the entire cluster. We will accomplish this by creating a Linux bridge.

auto vmbr0
iface vmbr0 inet static
    address 10.20.30.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    post-up iptables -t nat -A POSTROUTING -s '10.20.30.0/24' -o eth0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.20.30.0/24' -o eth0 -j MASQUERADE

Now that we have the interfaces configured we need to configure the host server to act as our router. As such, we need to make sure the kernel has packet forwarding enabled.

vim /etc/sysctl.conf

Only alter/uncomment these lines


net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

Lastly you will need to ensure we don't send ICMP redirect messages.

vim /etc/sysctl.conf

net.ipv4.conf.all.send_redirects=0

I need to give a shout out to https://blog.no-panic.at/2016/08/09/proxmox-on-debian-at-hetzner-with-multiple-ip-addresses/, this helped me massively when trying to figure this out.
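
You can also load the new sysctl values in immediately, without waiting for the reboot below:

sysctl -p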

Finally reboot the host and you're good to go!

Step three - Creating some virtual machines.

Now that you have your host networking set up you're ready to create your virtual machines. For this setup we will be creating a cluster of 3.

The layout will be,


node-01 10.20.30.101
node-02 10.20.30.102
node-03 10.20.30.103

We will be using Ubuntu 16.04 for each node. To start, SSH into your main Proxmox box and download the Ubuntu ISO into the Proxmox template directory. This is so Proxmox can see the ISO to mount it.

cd /var/lib/vz/template/iso
wget http://releases.ubuntu.com/16.04.1/ubuntu-16.04.1-server-amd64.iso?_ga=1.150784255.852713614.1480375703
mv ubuntu-16.04.1-server-amd64.iso?_ga=1.150784255.852713614.1480375703 ubuntu-16.04.1-server-amd64.iso

You will now be able to select this ISO when you create your VMs.

Now login to your Proxmox web-ui https://{MAIN_IP}:8006/ with your system credentials.

  • Input the hostname, e.g. node-01
  • For OS select Linux 4.X/3.X/2.6
  • For CD/DVD select your downloaded ISO from the ISO menu
  • For Hard Disk I recommend 200GB per VM, space permitting
  • For CPU use 1 core, CPU spec permitting
  • For Memory use 4GB, again memory permitting
  • For Networking select NAT mode
  • Then confirm

Now you will also need to add one more network adapter. This adapter will utilize the bridge we created in the previous section.

  • Select the new VM from the "Server View"
  • Find the Hardware option
  • Select "Add" and select "Network Device"
  • We need a "Bridged mode" interface, select bridge vmbr0
  • Change the "Model" to Intel E1000. There are issues with the standard virtualised network drivers
  • Then click "Add"

Now you can turn on your VM.

Install Ubuntu as you normally would, but be sure to use the NAT adapter when you install. We will configure the bridged adapter later.

You will need to repeat this step for each of the three VMs, or you can create a template from the first VM you created and clone it.

Step four - Configure VM Network

Once Ubuntu is installed you will need to set up the networking for each VM. Connect to node-01 with VNC from the web-ui and login.

Next you will need to configure the adapters. Open up /etc/network/interfaces:

vim /etc/network/interfaces

auto ens18
iface ens18 inet dhcp

Your NAT adapter should have already been configured for you. We will now add the bridged adapter. Add the below to the config:

vim /etc/network/interfaces

auto ens19
iface ens19 inet static
        address 10.20.30.101
        netmask 255.255.255.0
        gateway 10.20.30.1

Now restart the networking service

sudo service networking restart

Try to ping the Hetzner host

ping 10.20.30.1

If you are able to ping the host then it worked! Your virtual machine is connected to the main host via the network bridge with its own adapter.

You can confirm this by connecting to the new VM with an SSH tunnel through the Hetzner host

ADDRESS = 10.20.30.101 OR 10.20.30.102 OR 10.20.30.103
ssh -A -t root@{MAIN_IP} ssh -A -t {VM_USER}@{ADDRESS}

You will need to first input the Hetzner host's password and then your new VM's credentials. If everything was set up properly then you should be able to SSH into your new VM.

This model uses the NAT adapter to connect to the internet, but you could remove the NAT adapter and just use the private network with 10.20.30.1 as the network gateway.

You will need to repeat this for each of the VMs, assigning each VM a different IP.

node-01 10.20.30.101
node-02 10.20.30.102
node-03 10.20.30.103

You now have a 3 node VM cluster connected via a private network that you can ssh into.
