Before these new Dell servers, I had four Intel NUCs in my lab, which I'm now replacing with Dell PE R620s. Some may argue that the Dell servers will consume significantly more electrical energy; however, it is not that bad. A single PE R620 draws around 70-80 Watts. Yes, that is more than an Intel NUC, but only roughly 2 or 3 times more. Still, 4 x 80 Watt = 320 Watt, which works out to around 45 EUR per month, so I have decided to keep the servers powered off and spin them up only on demand. The Dell servers have out-of-band management (iDRAC7), so it is easy to start and stop them automatically via the RACADM CLI. To gracefully shut down all virtual machines, put the ESXi hosts into maintenance mode, and power them off, I will leverage PowerCLI. I have decided to keep one Intel NUC with ESXi 6.5 running some workloads at all times: the vCenter Server Appliance, a management server, a backup server, etc. All other servers can stay powered off until I need to do some tests or demos in my home lab.
I would like to have RACADM and PowerCLI up and running as well, to manage the Dell servers and vSphere via CLI or automation scripts. PowerCLI is available as an official VMware Docker image, and there are also some unofficial RACADM Docker images available on Docker Hub, so I have decided to deploy Photon OS as a container host and run RACADM and PowerCLI in Docker containers.
In this blog post, I'm going to document steps and gotchas from this exercise.
Photon OS is available on GitHub as an OVA, so deployment is very easy.
CHANGE ROOT PASSWORD
The first step after Photon OS deployment is to log in as root with the default password ("changeme" without quotation marks) and change the root password.
CHANGE IP ADDRESS
By default, the IP address is assigned via DHCP. I want to use a static IP address, so I have to change the network settings. In Photon OS, the systemd-networkd process is responsible for network configuration.
You can check its status by executing the following command:
systemctl status systemd-networkd
By default, systemd-networkd receives its settings from the configuration file 99-dhcp-en.network located in /etc/systemd/network/ folder.
Setting a Static IP Address is documented here.
I have created the file /etc/systemd/network/10-static-en.network with the following content:
NTP=time1.google.com time2.google.com ntp.cesnet.cz
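For reference, a complete static configuration file would look roughly like the sketch below. The interface name, addresses, and gateway are example values for illustration; substitute your own network's settings.

```ini
[Match]
Name=eth0

[Network]
Address=192.168.4.10/24
Gateway=192.168.4.1
DNS=192.168.4.1
NTP=time1.google.com time2.google.com ntp.cesnet.cz
```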
File permissions should be 644, which you can enforce with the command:
chmod 644 10-static-en.network
The new settings are applied with the command:
systemctl restart systemd-networkd
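To verify that the static address took effect, you can inspect the interface state (eth0 is the typical interface name on a Photon OS VM; adjust if yours differs):

```shell
# show systemd-networkd's view of the interface
networkctl status eth0

# show the addresses actually assigned to the interface
ip addr show dev eth0
```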
CREATE USER FOR REMOTE ACCESS
It is always better to use a regular user instead of the root account, which has full administrative rights on the system. Therefore, the next step is to add my personal account:
useradd -m -G sudo dpasek
-m creates the home directory, while -G adds the user to the sudo group.
Set a password for this user:
passwd dpasek
The next step is to edit the sudoers file with visudo. Search for %sudo and remove the '#' from that line. After that, you can log in with that account and run commands as root with sudo. Note that sudo is not part of the minimal installation, so you may first have to install it with
tdnf install sudo
as described later in this post.
DISABLE PASSWORD EXPIRATION
If you want to disable password expiration, use the chage command:
chage -M 99999 root
chage -M 99999 dpasek
ENABLE PING (ICMP)
Photon OS blocks ICMP by default, so you cannot ping it from outside. Ping is, IMHO, an essential network troubleshooting tool, so it should always be enabled; I do not think it is worth disabling for the sake of marginally better security. Here are the commands to enable ping:
iptables -A INPUT -p ICMP -j ACCEPT
iptables -A OUTPUT -p ICMP -j ACCEPT
iptables-save > /etc/systemd/scripts/ip4save
UPDATE OS OR INSTALL ADDITIONAL SOFTWARE
The Photon OS package manager is tdnf, so an OS update is done with the command:
tdnf update
If you need to install additional software, you can search for it and install it. I have realized there is no sudo in the minimal installation from the OVA, so if you need it, you can search for it:
tdnf search sudo
and install it
tdnf install sudo
START DOCKER DAEMON
I'm going to use Photon OS as a Docker host for two containers (PowerCLI and RACADM), so I have to start the Docker daemon:
systemctl start docker
To start the Docker daemon automatically on boot, use the command:
systemctl enable docker
ADD USER TO DOCKER GROUP
To run the docker command without sudo, I have to add my Linux user (me) to the docker group. Note that the group membership takes effect on the next login.
usermod -a -G docker dpasek
POWERCLI DOCKER IMAGE
I have already written a blog post about spinning up a PowerCLI Core container here. So let's quickly pull the PowerCLI Core image and instantiate a PowerCLI container.
docker pull vmware/powerclicore
Now I can remotely log in (via SSH) as a regular user (dpasek) and run any of my PowerCLI commands to manage my home lab environment.
docker run --rm -it vmware/powerclicore
Option --rm stands for "Automatically remove the container when it exits".
To work with PowerCLI, the following command is necessary to initialize the PowerCLI configuration:
Set-PowerCLIConfiguration -Scope User -ParticipateInCEIP $true
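In a home lab with self-signed certificates, you will most likely also want to relax certificate validation before connecting to vCenter; something along these lines (vcsa.home.lab is a placeholder for your vCenter address):

```powershell
# Ignore invalid (self-signed) certificates in this home lab
Set-PowerCLIConfiguration -Scope User -InvalidCertificateAction Ignore -Confirm:$false

# Connect to vCenter; you will be prompted for credentials
Connect-VIServer -Server vcsa.home.lab
```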
The configuration persists within each container session; however, it disappears when the container is removed. It is therefore better to instantiate the container without the --rm option, set up the PowerCLI configuration, keep the container on the system, and simply start it again next time to perform any other PowerCLI operation.
docker run -it -v "/home/dpasek/scripts/homelab:/tmp/scripts" --name homelab-powercli --entrypoint='/usr/bin/pwsh' vmware/powerclicore
The --name option is useful for naming the instantiated container, because the name can then be used to restart the container and continue with PowerCLI.
Inside the container, we can initialize the PowerCLI configuration, use any other PowerCLI commands and scripts, and eventually exit from the container back to the host. We can return to it later with the command:
docker start homelab-powercli -i
With this approach, the PowerCLI configuration persists.
RACADM DOCKER IMAGE
Another image I need in my home lab is Dell RACADM to manage the Dell iDRACs. Let's pull and instantiate the most downloaded RACADM image:
docker pull justinclayton/racadm
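I have not found official documentation for this particular image, but assuming its entrypoint is the racadm binary itself, a one-off query against an iDRAC would look something like this. The IP address and the default root/calvin credentials are placeholders; the serveraction subcommand syntax is standard remote RACADM:

```shell
# query the current power state of a server via its iDRAC
docker run --rm justinclayton/racadm -r 192.168.4.120 -u root -p calvin serveraction powerstatus
```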
I would like to store all my home lab scripts in a GitHub repository, synchronize it with my container host, and leverage it to manage my home lab.
# install Git
sudo tdnf install git
# configure Git
git config --global user.name "myusrname"
git config --global user.email "email@example.com"
git clone https://github.com/davidpasek/homelab
# save Git credentials
git config credential.helper store
RUN POWERCLI SCRIPT STORED IN CONTAINER HOST
In case I do not want to use PowerCLI interactively and instead want to run some predefined PowerCLI scripts, the local script directory has to be mapped into the container, as shown in the example below:
docker run -it --rm -v /home/dpasek/scripts/homelab:/tmp/scripts --entrypoint='/usr/bin/pwsh' vmware/powerclicore /tmp/scripts/get-vms.ps1
The --rm option removes the container from the system after the PowerCLI script is executed.
The -v option maps the container host directory /home/dpasek/scripts/homelab to the container directory /tmp/scripts.
I was not able to run the PowerCLI script directly with the docker command without using the --entrypoint option.
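For illustration, a minimal get-vms.ps1 stored in that directory could look roughly like this (the vCenter address is a placeholder, and credential handling is simplified to an interactive prompt):

```powershell
# Connect to vCenter; prompts for credentials
Connect-VIServer -Server vcsa.home.lab

# List all VMs with their power state and basic sizing
Get-VM | Select-Object Name, PowerState, NumCpu, MemoryGB | Format-Table -AutoSize

# Clean up the session
Disconnect-VIServer -Confirm:$false
```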
The whole toolset is up and running, so the rest of the exercise is to develop the RACADM and PowerCLI scripts to effectively manage my home lab. The idea is to shut down all VMs and ESXi hosts when the lab is not needed. When I need the lab, I will simply power on a particular vSphere cluster and the VMs within that cluster having the vSphere tag "StartUp".
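The shutdown part of that automation could be sketched in PowerCLI roughly as follows. The cluster name is an assumption based on my plan above, and this is a sketch rather than the final script (in particular, it does not yet wait for guest shutdowns to complete before touching the hosts):

```powershell
# Gracefully shut down all running VMs in the lab cluster
Get-Cluster -Name "Lab-Cluster" | Get-VM |
    Where-Object { $_.PowerState -eq "PoweredOn" } |
    Shutdown-VMGuest -Confirm:$false

# ... wait until the VMs are actually down, then put each host
# into maintenance mode and shut it down
Get-Cluster -Name "Lab-Cluster" | Get-VMHost | ForEach-Object {
    Set-VMHost -VMHost $_ -State Maintenance -Confirm:$false
    Stop-VMHost -VMHost $_ -Confirm:$false -Force
}
```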
I'm planning to store all these scripts in a GitHub repository for two reasons:
- The GitHub repository will be used as a backup solution
- You can track the progress of my home lab automation project