Welcome back to another state of the homelab post. A lot has changed since the last post in 2017, so let's dive right in.
Complete Overview
The following hardware is in use (top to bottom):
| Hardware | Description |
|---|---|
| Unifi US-16-XG (in the back) | Core switch and therefore the backbone of my network. |
| 1U 24-port patch panel | Connects the distribution switch with the servers/rooms. |
| Unifi US-24-250W | Distribution switch to the rooms and to the management interfaces of my servers. |
| 1U 24-port patch panel | Connects the distribution switch with the servers/rooms. |
| KMM unit | No-name model, still with PS/2 and VGA. |
| HP DL120 G7 | pfSense 2.4.4; E3-1270; 2 GB RAM; 30 GB SSD. |
| Self-built storage host | Main file server; FreeNAS 11.2; 2x E5-2648L; 120 GB RAM; 2 pools: VOL01 (8x 8 TB RaidZ2 with 2x 100 GB SLOG; see the sketch below the table) and VOL02 (mirrored vdevs with 8 HDD pairs of different sizes). |
| Dell R510 | Backup file server, running only once a week for a snapshot of the main file server; FreeNAS 11.2; 1x L5520; 16 GB RAM; 1 pool: VOL01 (2x 4x 250 GB RaidZ1). |
| Dell R720 | Hypervisor; ESXi 6.7; 2x E5-2690; 192 GB RAM; RAID 1 with 2x 120 GB HDDs for ESXi; RAID 5 with 4x 300 GB HDDs as a local datastore. |
| Dell R620 | Windows Hyper-V; not running right now. |
| Dell 2700R | 2700 W UPS powering most of my rack. |
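For anyone curious how VOL01 on the main file server is laid out, here is a minimal sketch of the pool expressed as the `zpool` command that would build it, wrapped in a small Python helper. The device names are placeholders, not my actual disks.

```python
#!/usr/bin/env python3
"""Sketch of the VOL01 layout: 8x 8 TB in RaidZ2 plus a mirrored SLOG.
Device names are placeholders -- do NOT run this against a live system
without substituting your own disks."""
import subprocess

data_disks = [f"/dev/da{i}" for i in range(8)]  # 8x 8 TB data drives (placeholder names)
slog_disks = ["/dev/da8", "/dev/da9"]           # 2x 100 GB S3700 SLOG (placeholder names)

cmd = ["zpool", "create", "VOL01",
       "raidz2", *data_disks,
       "log", "mirror", *slog_disks]

print(" ".join(cmd))               # inspect the command first...
# subprocess.run(cmd, check=True)  # ...then uncomment to actually create the pool
```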
The setup pulls about 400-500 watts; my goal is not to go over 500 watts.
Cleaning up the Rack and a Core Switch Replacement
Or at least that was the plan. Over the year everything was running, the servers got dusty and the cable management turned awful. Since I was not able to fully open the KVM/KMM and I had bought a new core switch, I decided to not only redo the cable management but to remove every device from the rack. This let me redo the whole cabling and clean out every bit of dust. To keep dust out of the rack in the future, I bought some cheap noise-dampening mats that let enough air flow through while filtering the incoming air. While I was at it, I also added labels and documented everything that needed documenting.
Speaking of documentation: you might have seen the network map I added to the gallery in this post. It is one of the things I had wanted to do for a long time but never got around to because of time constraints.
New UPS
Since my HP UPS had quite a few years on its back, I was looking for a bigger, newer, and all-around better UPS.

Dell 2700R UPS
I was able to snag a Dell 2700R, which has enough juice to power everything I have running at the moment. However, since the UPS has a C20 inlet, I had to get a new power outlet.
After getting in touch with the electrician in town, we came to the conclusion that the cheapest option would be to exchange the current three T13 outlets for two T23 outlets, with no change to the circuitry. For now this is a better solution than adding a new circuit protected by a 16 A fuse. At the moment I only have 10 A, which works, but it might not be enough in the future if I decide to expand.
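To put that in numbers (assuming the standard 230 V here): a 10 A fuse allows roughly 230 V × 10 A = 2300 W, so the 400-500 W the rack pulls today leaves comfortable headroom, while a 16 A circuit would raise that ceiling to about 3700 W.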
FreeNAS Host Replacement
After not being satisfied with my old FreeNAS host (a DL180 G6), I went online and searched for an alternative with more space for hard drives/SSDs and, if possible, a smaller power draw. To get there, I wanted a motherboard with at least an LGA 2011 socket. I was also in need of a new case, since the old one uses a proprietary motherboard.

ICY DOCK MB996SP-6SB 6-Bay 2.5″
After quite some time of browsing different marketplaces, I found a cheap, nice case with space for 16 drives and a 5.25″ slot, where I wanted to put a 6x 2.5″ HDD mount I already had lying around. After winning the auction, a one-and-a-half-hour drive, and 20 CHF less in my pocket, I had my new case.
First I had to thoroughly clean it with a vacuum cleaner and some wet towels. After that I exchanged the fans for some quieter Noctua ones. The previous fans were 3-pin fans, so that is what I got as replacements. In my case it was two different models: two NF-R8 redux-1800 and three NF-R8 redux-1200. Overall they are very quiet and I am happy with them.
Update: since I changed to 8 TB platters the server was getting pretty toasty, so I swapped the fans for five NF-R8 redux-1800s.
So now I had a case but was missing the actual computing parts to make it usable. I went ahead and contacted a seller I had previously bought server gear from. Lucky me was able to get a Supermicro X9DRD-7LN4F and two E5 L-series processors for 415 CHF. The motherboard is great: it has plenty of slots for DDR3 RAM, and the two SAS ports with their Broadcom 2308 chip can be flashed to IT mode. In my opinion, that is the best feature of this motherboard and the reason I bought it.
I also bought M.2 PCIe SSD adapters. These are not in use yet, since my budget does not allow for two good M.2 SSDs right now; I will buy two Samsung ones in the near future. In the meantime, I bought two Intel S3700 100 GB SSDs, used as mirrored SLOG devices. In my use case they speed up synchronous writes over NFS.
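To illustrate why the SLOG matters: NFS issues synchronous writes, which ZFS must commit to stable storage (the ZIL) before acknowledging, and that is exactly the work the mirrored S3700s absorb. Here is a minimal sketch that shows the gap between buffered and synchronous writes; the test path is an assumption, so point it at a dataset on your own pool.

```python
#!/usr/bin/env python3
"""Rough comparison of buffered vs. synchronous (O_SYNC) writes.
NFS sync writes behave like the O_SYNC case, which is what a fast
SLOG device speeds up. TEST_FILE is a placeholder path."""
import os
import time

TEST_FILE = "/mnt/VOL01/slog-test.bin"  # placeholder; use a dataset on the pool
BLOCK = b"\0" * 4096
COUNT = 1000

def timed_write(flags: int) -> float:
    fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | flags, 0o644)
    start = time.monotonic()
    for _ in range(COUNT):
        os.write(fd, BLOCK)
    os.fsync(fd)  # flush whatever is still buffered
    os.close(fd)
    return time.monotonic() - start

buffered = timed_write(0)
synced = timed_write(os.O_SYNC)  # every write waits for stable storage
os.unlink(TEST_FILE)

print(f"buffered: {buffered:.2f}s, O_SYNC: {synced:.2f}s")
```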
Now I had everything I needed for my new file server. Well, at least until I noticed that I did not have any space for the OS. So I went ahead and bought two USB sticks for a mirrored boot pool. They are well-built and have not failed on me since the beginning of 2018.
Better Wireless and a New Switch in the Office
For a long time my WiFi was really crappy, which came down to the cheap router I was using in AP mode: the device only had 100 Mbps ports and Wireless N. To get faster WiFi speeds, I finally got myself a Unifi UAP-AC-Pro. I went with a Ubiquiti device since I almost exclusively read good things about them over at /r/homelab. The unit looks great and feels sturdy. I coupled the AP with a new Unifi switch: to power the AP, and in the future a network cam over PoE, I got the 8-port 60 W model.
After I got the hardware, I started by installing a VM on my ESXi host, where I hit the first hiccup. Normally I use CentOS 7, but the Unifi controller is not available for it from Ubiquiti directly. Sure, I could have gotten away with an unofficial build, but I figured that would be a much bigger hassle, since the software has to be updated by hand, AFAIK. That was a deal breaker for me, so I quickly spun up an Ubuntu 18.04 VM instead. The install is really easy in my opinion and was done in under 15 minutes.
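Once the controller was installed, I wanted a quick way to confirm it was actually up. As far as I know, the controller answers over HTTPS on port 8443 with a self-signed certificate and exposes an unauthenticated /status endpoint; the hostname below is a placeholder for your controller VM.

```python
#!/usr/bin/env python3
"""Quick post-install check that the Unifi controller answers.
Certificate checks are disabled because the controller ships with a
self-signed certificate. CONTROLLER is a placeholder hostname."""
import json
import ssl
import urllib.request

CONTROLLER = "https://unifi01.example.lan:8443/status"  # placeholder hostname

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # self-signed out of the box

with urllib.request.urlopen(CONTROLLER, context=ctx, timeout=5) as resp:
    status = json.load(resp)

print("controller version:", status["meta"]["server_version"])
```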
“For what do you need a VPS?”
Well, I have some small projects which I do not want to host at home, one of them being this blog.
I am also hosting some small websites for some of my colleagues, which gives me the opportunity to learn on and manage live systems. The VPS runs Webmin, which gives me easy management and gives my users fast deployment and their own domain management.
Another thing I use the VPS for is VPN: I have an OpenVPN service running, so I have a VPN endpoint in Germany with good speeds, which sadly is not the case on my home connection.
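A quick sanity check I like to run after bringing the tunnel up is comparing my public IP against the VPS address. A minimal sketch, with the VPS address being a placeholder:

```python
#!/usr/bin/env python3
"""Verify traffic leaves via the VPS once the OpenVPN tunnel is up.
Compares the public IP reported by an IP echo service against the
VPS address. VPS_IP is a placeholder (TEST-NET-3), not a real host."""
import urllib.request

VPS_IP = "203.0.113.10"  # placeholder; use your VPS address

with urllib.request.urlopen("https://api.ipify.org", timeout=5) as resp:
    public_ip = resp.read().decode().strip()

if public_ip == VPS_IP:
    print("tunnel active: traffic exits through the VPS")
else:
    print(f"tunnel down? public IP is {public_ip}")
```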
In 2017 I changed from DigitalOcean to Hetzner; they are cheaper and give me better hardware. So far I have not been disappointed by the provider change.
“What are you running on your Homelab?”
I hear you asking. Let’s see:
| VM Name | Specifications | Description |
|---|---|---|
| DC01 | 2 vCPUs, 4 GB RAM, 40 GB HDD | One of my very few Windows Server machines. It runs my Active Directory for global authentication and also serves as a print server and DHCP server. |
| UNIFI01 | 1 vCPU, 4 GB RAM, 16 GB HDD | As the name suggests, this runs the Unifi controller software I use to configure the Ubiquiti gear in and around my homelab. It runs Ubuntu, since there is no official CentOS version of the controller. |
| PLX01 | 8 vCPUs, 8 GB RAM, 16 GB HDD | My Plex media server, running CentOS 7 and the newest version of Plex. It gets its data from NFS shares on the new FreeNAS host. I never have more than 2 streams at once, so performance is no issue; streams outside my network, however, are limited by my low upload speed. Also running on this machine is Ombi; I can only recommend installing it alongside your Plex server, as it gives you cool data about your user base. |
| DL01 | 2 vCPUs, 8 GB RAM, 16 GB HDD | The machine I use to centrally and automatically download my media. It runs Ombi, Sonarr, Radarr, Lidarr, SABnzbd, Deluge, and scdl for downloading my likes on SoundCloud. It also mounts the data directories over NFS (a small mount check is sketched below the table). I really like this setup, since it lets me easily manage my plethora of movies, series, and music. |
| SWS01 | 2 vCPUs, 6 GB RAM, 80 GB HDD | The Spacewalk server is a VM I no longer use, since I made some mistakes setting it up: the machine is running out of space and updates are not being downloaded. I want to replace it with Foreman or SCCM, but I am still unsure which solution to jump on. |
| FOREMAN01 | 2 vCPUs, 12 GB RAM, 80 GB HDD | A test server on which I am trying out Foreman. I have not been able to configure it completely yet; time is a scarce resource and my knowledge is not quite there yet. |
| HASS01 | 2 vCPUs, 4 GB RAM, 16 GB HDD | Since I wanted to get into home automation, I installed Home Assistant on a VM. For now it does only two things: turning my lights on and off, and monitoring who is at home. |
| VPC01 | 4 vCPUs, 8 GB RAM, 64 GB HDD | VPC stands for Virtual Private Computer. It is for random stuff that needs a Windows operating system. The machine is joined to my Active Directory and is mostly used for management tasks and for reencoding movies to x265 with HandBrake. |
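Since both PLX01 and DL01 depend on the NFS shares from the FreeNAS host, here is the kind of small check mentioned above that can run before the download stack starts writing anywhere. It is a Linux-only sketch, and the paths are assumptions based on my layout.

```python
#!/usr/bin/env python3
"""Check that the NFS media shares are mounted before the download
stack writes anywhere. The paths below are placeholders."""
MOUNTS = ["/mnt/media/movies", "/mnt/media/series", "/mnt/media/music"]  # placeholders

# /proc/mounts lists one mount per line; the second field is the mount point.
with open("/proc/mounts") as f:
    mounted = {line.split()[1] for line in f}

missing = [m for m in MOUNTS if m not in mounted]
if missing:
    raise SystemExit(f"NFS shares not mounted: {missing}")
print("all media shares mounted")
```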
And now that you have read all this, you will say: “Why do you have so much hardware? You could do the exact same on a NUC”, and you would be right. I for my part like that I can spin VMs up and down without running out of resources in the near future; additionally, it is a great learning experience to work with business hardware. And there is the typical homelab argument: “Because I can!”
Future Changes
- FreeNAS M.2 SSDs for fast VM storage
- Change from ESXi to Hyper-V (eventually) because of the licensing allowing unlimited Windows Server VMs
- Automated installation of VMs with Foreman or SCCM, and with that the removal of the outdated Spacewalk server. Depending on what I pick, I will post a guide here.
- Automated setup of services with Ansible
- Upgrade from CentOS 7 to Ubuntu 18.04 or CentOS 8 (once it is available)
- Getting my feet wet with Docker
- A bare-metal domain controller, in case I have to update my ESXi host