Building an OpenShift - OKD 4.X Lab, Soup to Nuts
Note: This tutorial is being deprecated in favor of a new version:
Link To New Tutorial: https://upstreamwithoutapaddle.com/home-lab/lab-intro/
The newest version of my helper scripts is here: https://github.com/cgruver/kamarotos
The archived main branch can be found in the archive branch of this project, and the previous documentation can be found at: Lab Intro
I will not be doing any additional development on this project.
New work can be found on my Blog: Upstream - Without A Paddle
Equipment for your lab
You will need at least one physical server for your lab. More is obviously better, but also more expensive. I have built my lab around the small form-factor NUC systems that Intel builds. My favorite is the NUC10i7FNK. This little machine sports a 6-core 10th Gen i7 processor at 25W TDP and supports 64GB of RAM.
I am also a fan of the NUC10i3FNK. This unit is smaller than the NUC10i7FNK. It sports a dual-core CPU, supports 64GB of RAM and has a single M.2 slot for an SSD. One of these will make a great Bastion Host and development server.
You will either need a router that supports DHCP reservations, TFTP, and PXE, or you will need to set up DHCP & TFTP on a Linux host. Assuming that you already have a home router, you can use it as long as it supports TFTP and PXE configuration; most consumer WiFi routers do not. However, if you want something portable and awesome, check out the GL.iNet GL-AR750S-Ext. This little guy runs OpenWRT, which means that you can use it as a router for your lab network, plus a wireless bridge, VPN endpoint, PXE server, HTTP server, DNS server, and more. OpenWRT is a very powerful networking distro. There is a newer model out now, the GL-MV1000. It does not have WiFi, but it is much faster than the GL-AR750S-Ext. I carry the AR750 with me when traveling, and use a pair of the MV1000s in my home lab.
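If you go the OpenWRT route, enabling TFTP and PXE is mostly a matter of configuring the dnsmasq instance that OpenWRT already runs. The snippet below is a minimal sketch, not the exact configuration from my guides: the TFTP root path, boot file name, and the example MAC/IP reservation are placeholder values you would replace with your own.

```bash
# Minimal sketch: enable TFTP + PXE boot on an OpenWRT router via UCI (dnsmasq).
# Paths, boot file, and the example DHCP reservation are placeholders.
uci set dhcp.@dnsmasq[0].enable_tftp='1'
uci set dhcp.@dnsmasq[0].tftp_root='/data/tftpboot'   # assumes extra storage mounted at /data
uci set dhcp.@dnsmasq[0].dhcp_boot='pxelinux.0'       # BIOS PXE loader; use an iPXE/UEFI binary if needed

# Example DHCP reservation so a lab node always gets the same address
uci add dhcp host
uci set dhcp.@host[-1].name='okd4-master-0'
uci set dhcp.@host[-1].mac='1c:69:7a:00:00:01'
uci set dhcp.@host[-1].ip='10.11.12.60'

uci commit dhcp
/etc/init.d/dnsmasq restart
```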
You may also need a network switch if you don't have enough 1Gb ports available in your router. I am using a couple of Netgear GS110EMX switches. It's a great little managed switch with eight 1Gb ports and two 10Gb ports. The 10Gb ports are really handy if you also have a NAS device that supports 10Gb network speeds.
Optional: NAS device.
In early 2019, I came across this little Frankenstein: the QNAP NAS-Book TBS-453DX. This thing is not much bigger than the NUCi7KYK (the VHS tape). It has 4 M.2 slots for SSD storage and will serve as an iSCSI server, in addition to all of the other capabilities that QNAP markets it for. The iSCSI server is what caught my eye! This device completes a mini-datacenter setup. With it added to my lab, I am able to replicate most of the capabilities that you will find in an enterprise datacenter.
My home lab has grown to be almost embarrassing… but, what can I say, except that I have a VERY understanding wife.
For your own lab, I would recommend starting with the following:
- 1 x NUC8i3BEK - For your Bastion host and development server
- 32GB RAM
- 500GB M.2 SATA SSD
- 1 x NUC10i7FNK - For your Hypervisor
- 64GB RAM
- 1TB M.2 SATA SSD
- 1 x GL.iNet GL-AR750S-Ext - For your router
A minimal setup like this will cost a little less than a 13” MacBook Pro with 16GB of RAM. For that outlay you get 8 CPU cores (16 virtual CPUs), 96GB of RAM, and a really cool travel router!
Check prices at Amazon.com and B&H Photo Video. I get most of my gear from those two outlets.
Once you have acquired the necessary gear, it’s time to start setting it all up.
Follow each of these guides to get set up:
- Bastion Host
- DNS Setup
- Nginx Setup & RPM Repo sync
- PXE Boot with TFTP & DHCP
- Sonatype Nexus Setup
- Build KVM Host/s
When your setup is complete, it’s time to deploy your OKD cluster:
After deployment is complete, here are some things to do with your cluster:
- Designate your Master Nodes as Infrastructure Nodes. Do NOT do this step if you do not have dedicated worker nodes. If you have dedicated worker nodes in addition to three master nodes, then I recommend this step to pin your Ingress Routers to the master nodes. If the routers restart on worker nodes, you will lose Ingress access to your cluster unless you add the worker nodes to your external HAProxy configuration. I prefer to use Infrastructure nodes to run the Ingress routers and a number of other pods (see the sketch after this list).
- Set up Htpasswd as an Identity Provider
- Deploy a Ceph cluster for block storage provisioning
- Create a MariaDB Galera StatefulSet
- Updating Your Cluster
- Tekton pipeline for Quarkus and Spring Boot applications
- Gracefully shut down your cluster
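For the infrastructure-node item above, the general shape of the procedure on OKD 4.x looks something like the sketch below. It is not the exact sequence from the linked guide: the node names are hypothetical placeholders, and the guide covers the full labeling and scheduling details.

```bash
# Sketch only: label the control-plane nodes as infra nodes.
# The node names below are hypothetical; substitute your own.
for node in okd4-master-0 okd4-master-1 okd4-master-2; do
  oc label node "${node}" node-role.kubernetes.io/infra=""
done

# Pin the default Ingress Controller to the infra-labeled nodes and
# tolerate the master NoSchedule taint so the routers can land there.
oc patch ingresscontroller default -n openshift-ingress-operator --type merge \
  -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}},"tolerations":[{"key":"node-role.kubernetes.io/master","operator":"Exists","effect":"NoSchedule"}]}}}'

# Watch the router pods reschedule onto the master/infra nodes.
oc get pods -n openshift-ingress -o wide
```

After the patch, the router pods in the openshift-ingress namespace should land on the master/infra nodes, which keeps your external HAProxy configuration pointed at a stable set of hosts.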