
Homelabs vs. the Cloud: Rediscovering Hands-On Engineering for Modern Teams

“The cloud gives us power and reach. The homelab gives us freedom and control.”

 

Part 1: Why Every Engineering Team Should Have a Homelab (Even in the Cloud Era)

 

⚙️ The Case for Hands-On Infrastructure

In today’s cloud-first world, it’s easy to forget how much is abstracted away. Spin up a Kubernetes cluster in three clicks. Deploy to a managed Postgres service. Push logs to a hosted dashboard. Done.

But while this power and convenience are essential for building at scale, they come with a cost: reduced visibility, limited experimentation, and slower skill development.

That’s why – despite having access to enterprise-grade cloud environments – I built a homelab.

Seventeen years ago, I was introduced to virtualization over coffee with an infrastructure engineer. I told him I wanted to experiment with some new technology but was afraid of installing it on my own computer: it would pull in many other dependencies and could crash my development environment. Task deadlines and a highly restricted environment left very little room for experimentation, and it would have been risky anyway, since the company ran its own bare-metal server and I could have compromised it all. Trying new tech, even in the test environments, was unthinkable. That was when I first heard of VirtualBox. And I fell in love with it.

Virtualization and virtual machines helped shape the professional I am today. Even though virtualization has improved a lot over the years, for development purposes VirtualBox, Hyper-V (WSL, is that you?) or KVM still do the job quite well. In today's tech stack I don't use as many VMs as I used to, but Docker is part of my day-to-day work, and thanks to VirtualBox (and later the Xen Project) I understand the whys and hows of the complex cloud world we live in today, which reflects on my local development environment as well.

As I said, I started small: 17 years ago, 8GB of DDR3 and my Pentium 4 HT could do a lot. I had a few VMs configured, running Fedora or Ubuntu, with JBoss 4 serving my application in one VM and my database in another. When an incident was raised or a change had to be made to the database, I could simply replace that VM with another one, without affecting my local environment. A real time saver.

Nowadays, Docker, Podman, containerd, LXC or any OS-level virtualization software does all of that for us. We have more processing power and more memory in our computers, so it's easy to spin up a complex environment with a few commands, within seconds. However, the ecosystem keeps growing; that's inherent to many of the problems we solve every day, as components get smaller and depend on external parties to work, such as microservices. A frontend application that was built with prototype.js and script.aculo.us back then today needs Node.js and Angular, and at least Nginx to be served properly. Java applications are now self-sufficient, with no need for an external web or application server, though their components are scattered, split across many downstream services.
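
That "complex environment with a few commands" claim is easy to make concrete. A minimal sketch of such a multi-component dev environment as a compose file (the image names and the `my-api` service are illustrative placeholders, not my actual stack):

```yaml
# docker-compose.yml — hypothetical three-tier dev environment
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev-only-password   # never do this in production
    volumes:
      - db-data:/var/lib/postgresql/data
  api:
    image: my-api:latest        # your application image (placeholder)
    depends_on:
      - db
  web:
    image: nginx:stable         # serves the frontend, proxies to the api
    ports:
      - "8080:80"
    depends_on:
      - api
volumes:
  db-data:
```

One `docker compose up -d` brings the whole stack up; `docker compose down -v` throws it away. That is exactly the disposability my old VMs gave me, at a fraction of the cost.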

So my needs required another approach. Learning new tech, solving my first-world problems (my pictures folder is too big, my ebook collection is not organized, the documents are a mess, where is that bookmark I added last week to read later? And the list goes on…) while learning a new stack, playing video games, storing my digital life, my documents, my experiments, my roast beef, pasta and cookie recipes… all of it couldn't live in the same environment anymore.

That was the moment I decided to build my first homelab. Other motivations included privacy and control over my own data, as we keep hearing about leaks, ransomware and other threats out there, services going dark, unavailability, and more…

 

My first homelab was an old laptop I had been using mostly for the web. I cleaned it up, reinstalled the OS, added Docker and plugged it into my network. The initial configuration was easy: I added some Docker containers behind Traefik, a service here and there, backups of my GitHub repositories, and it seemed to solve the problems I was facing. Well, the thing is, I saw the potential it had, and I started… experimenting. And it grew.
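
The "containers behind Traefik" pattern is mostly a matter of labels on each service. A hedged sketch of what one such service definition can look like (the image, hostname and router name are examples, not my actual configuration):

```yaml
# Compose fragment: one self-hosted app exposed through Traefik
services:
  wiki:
    image: requarks/wiki:2      # Wiki.js, listening on port 3000
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.wiki.rule=Host(`wiki.home.example`)"
      - "traefik.http.routers.wiki.entrypoints=websecure"
      - "traefik.http.services.wiki.loadbalancer.server.port=3000"
```

Traefik watches the Docker socket, picks up the labels, and starts routing; adding "one service here and there" really is this incremental.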

Today, it’s a fully virtualized environment with its own DNS, object storage, media server, reference wiki, note-taking apps, backup workflows, GitOps, and users.

More importantly, it’s a space where I can try things I can’t in the cloud – whether it’s running a fully offline environment, simulating a catastrophic scenario, or testing a new Hyperledger on a live VM.

 

☁️ Cloud at Work vs. Homelab at Home

In production, teams benefit from:

  • Virtually infinite compute and storage.
  • Enterprise support and SLAs.
  • Secure-by-default managed services.
  • Policy enforcement, audit trails, and access control.

 

But they’re often limited by:

  • Cost visibility (experiments have real price tags).
  • Permission boundaries and risk aversion.
  • Infrastructure-as-a-Service walls – you can use them, but can’t touch them.

 

In contrast, a homelab gives you:

  • Full-stack control, from BIOS to DNS.
  • Total freedom to experiment (and break things).
  • No billing anxiety for spinning up 10 VMs or reindexing a TB of data.
  • The curiosity to learn how to connect the dots of the diagram your architect drew up to solve some problem, and the alternative solutions you discover while exploring it.

 

The result? A richer, deeper learning environment — ideal for developers, platform engineers, and even CTOs who want to stay close to the metal.

 

Homelab vs Cloud: Not Opponents, But Complements

 

| Feature         | Cloud                          | Homelab                           |
|-----------------|--------------------------------|-----------------------------------|
| Scale           | Infinite                       | Constrained by power/budget       |
| Experimentation | Risk-managed, cost-tracked     | Unlimited freedom to try/fail     |
| Observability   | Managed, abstracted            | Fully built and owned             |
| Security        | Centralized, automated         | You design the policies           |
| Cost Control    | Pay-as-you-go (often expensive)| Fixed upfront hardware investment |
| Ownership       | Vendor-dependent               | Full local control                |
| Learning        | Tool-focused (YAML, APIs)      | Full-stack (network, OS, services)|

 

While the cloud offers homogeneous environments, a homelab brings the challenge of working within limitations that are also present in an enterprise environment, just in different ways. If I wanted to introduce an RDBMS to my homelab stack, I would think twice before adding another one just for the sake of connectivity. I would choose MySQL or PostgreSQL and make all my microservices use the same system. And that's actually a good thing.

Think of a company trying to bring to market a new service with a lot of different components: the teams working on different subdomains often come up with similar solutions, but built from different components.

While some of those components make sense, others not so much. For instance, it’s good practice to have a database per microservice; however, if one microservice uses MySQL, another uses PostgreSQL, and yet another uses MariaDB, then we have a problem maintaining all of it. More dashboards to look at, more alerts, more observability concerns, more logs… The devs stop doing what they do best, developing, and start doing environment maintenance instead.

The cost for the company is huge in at least two ways:

  • too many services to keep (often with licenses), VMs, managed services – all expensive on their own;
  • people doing extra tasks instead of focusing on what they do best.

 

That’s where the homelab helps developers make the right decision, or work within the limitations already present. Technological boundaries should be part of any stack.

Developers are pragmatic; they think twice when they have to maintain several similar systems on their own.

  • Cloud excels at delivering products fast and reliably.
  • Homelabs excel at teaching the systems behind those products — making developers more versatile, resilient, and curious.

 

Homelabs Are More Than Hobby Projects

They’re:

  • Training grounds for platform thinking.
  • Prototyping labs for internal tools and services.
  • Safe spaces for breaking and fixing things at will.
  • Cheap, since the hardware and resources are limited. And they can easily be turned off if power consumption creeps up. No extra charges.

 

In a world where “infrastructure” increasingly means “some YAML you push to AWS,” homelabs bring back the tactile side of engineering — and create developers who understand not just how things work, but why.

Thinking about the similarities between a homelab and a cloud environment, it is easy to turn any home server into a private cloud server. There are many OSes out there that simulate the same ecosystem, including plugin stores where you can find your favorite piece of software to store your pictures (like a self-hosted Google Photos alternative), available within minutes.

But getting there, setting up the store and installing the software, and understanding how, where, what, and when the new software is being deployed, requires more effort. And it’s fine if you are unsure whether I am talking about AWS or Nextcloud. That’s exactly the point. The similarities are there, but the understanding lives on only one side.

 

What a Homelab Enables

  • Test cloud-native patterns offline
  • Host self-contained development platforms
  • Explore networking, storage, DNS, and access control
  • Train engineers without risking production
  • Develop a real intuition for system behavior

 

I am sure there are many more things a homelab can do. Now ask yourself: how easy is it to achieve all that on any enterprise cloud platform?

Quite often companies demand innovation, but innovation comes with a cost. A high one, in many cases.

If developers had the chance to explore new stacks, new compositions, ideas of their own, without hitting permission barriers or resource limitations in an environment where resources are supposed to be limitless, innovation would come naturally. That’s one of the main goals of a homelab.

But aren’t there limitations, just as there are in the cloud? Yes and no. When you are told you can have all the resources out there and that budget is the only limit, but then you hit many other boundaries, such as permissions, configurations, or services that are not part of your subscription tier, you realize the cloud isn’t limitless, and every idea you had for your solution is now compromised. If, on the other hand, you already know the limitations of your environment, you can plan ahead for them, work around them, and be creative within the limits you have.

 

Team & Org Use Cases

  • Engineering onboarding
  • Platform experimentation
  • CI/CD pipeline prototyping
  • Infrastructure-as-Code training ground
  • Exploring open-source cloud replacements

 

Homelab on… the cloud?

Does that mean every developer should have their own homelab? No, of course not. There are many things to consider before building one.

Before building my own, I asked myself a few key questions that helped me decide whether I was going the right way, and, of course, I carefully wrote down some requirements I wanted to meet.

 

My requirements were:

  • I would like a place to store my backups, following the 3-2-1 Backup rule
  • I would like to control my data
  • I would like to have my digital life centralized and organized the way I like, without relying on big players or paying for lots of subscriptions to keep my data online.
  • I would like to have access to my data safely, through a secure channel that I know and trust
  • I would like to tweak the apps I am using for different purposes, such as note taking, photos, documents etc.
  • I would like to create my own services or software while I learn new things in the process of making them available to my loved ones.

 

Not a big deal, right? As for the questions:

  • How much money do I have to invest? Even though it is cheaper than a cloud provider in the long run, it is a significant investment in hardware. It doesn’t come close to enterprise-grade servers, but still, it’s not money most people have lying around to spare on something they’ll use only now and then.
  • How much time am I willing to invest in working on it, and how far would I go to make it work? Are my requirements achievable with an old computer, or should I buy something new?
  • How much am I willing to learn to make it work the way I want, with my own domain, SSL certificates, VMs, subnets, network zones, domain isolation and all the other paraphernalia involved?
  • How long do I think I’ll keep it up and running?
  • Will I try to make more out of it, maybe turning it into a media server, controlling my smart home, and adding some AI agents to help me out with some stuff?

 

These were the questions that mattered to me. Of course I answered yes to most of them, and when I compared the cost of a mini server against a cloud provider, in the long run the mini server was cheaper. For perspective, just on storage: keeping my data on S3 (around 400GB of photos, videos and documents) in hot storage, writing it to the cloud and reading it back to check availability and consistency, would cost me around $20/month. Cold storage for the same amount of data would cost around $3 a month. In about 5 months of hot-storage fees, I could buy a NAS-grade 4TB HDD and store the data myself, with space for far more. And that doesn’t even include benefits like the experimentation side of the homelab: setting up VMs, containers and whatnot.
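
The break-even arithmetic above fits in a few lines of shell (the prices are the rough figures from my own comparison, not quotes, and a 4TB NAS-grade HDD price of $100 is an assumption):

```shell
#!/bin/sh
# Rough storage cost comparison; all figures illustrative.
hot_per_month=20    # USD/month, ~400GB on hot S3-style storage
cold_per_month=3    # USD/month, same data on cold storage
hdd_price=100       # USD, approximate NAS-grade 4TB HDD

echo "Hot storage pays for the HDD in $(( hdd_price / hot_per_month )) months"
echo "Cold storage pays for it in $(( hdd_price / cold_per_month )) months"
```

Even against cold storage the drive pays for itself within three years, and the drive keeps working after that.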

Above all, with a homelab, I am in control of what is there.

Most of the advantages a homelab brings can be achieved in a corporate environment too, using free-tier components and sticking to cheap environments.

So corporations can provide the same developer experience we get from a homelab. It would take a bit of effort to create an experimentation lab for the devs, similar to what Confluence does with personal spaces. That would help developers explore new things while isolated and sandboxed, and innovation would come naturally to teams and companies where testing, staging and production environments are locked down (often for good reasons).

 

Part 2: The Homelab Build – Hardware, Storage, and Network Foundations

What I built, how I built it, and why I made the decisions I did — from compute and storage to VLANs and DNS.

 

Goals Before Hardware

As I said, before building the homelab I came up with a few requirements I would like to achieve:

  • Backup storage
  • Privacy-oriented
  • Centralized and isolated
  • Secure
  • Customizable, with room to try new things
  • Able to run my own code

 

Those were not the only ones; after all, I wanted to do some things myself. So I added these to the list:

  • Have my own internal domain
  • Be available to my guests
  • Maintain upgrade flexibility
  • Be a playground where I could study and break things

 

Over time, as expected, the list grew even more; today I am running close to 30 services, and a NAS has joined the setup.

 

Compute: Power-Efficient but Capable

Lab

  • Minisforum MS-01 (i5-12600H, 32GB RAM, 1TB NVMe)
  • Silent, efficient, and supports full virtualization (VT-d)
  • Powerful enough to run many apps in parallel with a clean installation.

 

Storage: Centralized, Redundant, and Fast

Synology 1522+

  • 2x12TB HDD, 3x20TB HDD (RAID-5) + 1TB SSD cache
  • Shared via NFS, SMB, and S3 (via MinIO)
  • Nightly ZFS snapshots with offsite backup option
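
Snapshot-based backup like this is easy to automate. A hedged cron sketch of the nightly job (the pool and dataset names are assumptions, not my actual layout):

```
# /etc/cron.d/zfs-nightly — illustrative
# 03:00: recursive, dated snapshot of the data dataset
0 3 * * * root /usr/sbin/zfs snapshot -r tank/data@nightly-$(date +\%Y-\%m-\%d)
# Offsite replication is then a matter of `zfs send | zfs recv` to the
# backup target; incremental sends need bookkeeping, which tools like
# sanoid/syncoid handle well.
```

Snapshots are cheap because ZFS is copy-on-write, so a nightly cadence costs almost nothing until data actually changes.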

 

Networking: Your Own VPC

Router + Firewall

  • ASUS ROG Rapture GT-rea
  • VLANs for Admin, Services, IoT, and Guests
  • Firewall on the router + OS configuration
  • 2.5GbE managed switch
  • Wi-Fi 7 AP with VLAN tagging support

 

DNS & Ingress

  • Pi-hole as DNS resolver
  • Traefik as a dynamic reverse proxy (Let’s Encrypt TLS)
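
Pi-hole makes internal name resolution trivial: local records can live in a plain hosts-style file, which Pi-hole exposes as "Local DNS records" in its admin UI. A sketch (the addresses and the `home.example` domain are made up):

```
# /etc/pihole/custom.list — one "IP hostname" pair per line
192.168.20.10 traefik.home.example
192.168.20.10 wiki.home.example
192.168.20.11 nas.home.example
```

Point every service hostname at the Traefik host and let Traefik route by Host header from there; each new container then needs only one DNS line and a couple of labels.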

 

Lessons Learned

  1. Power matters: an efficient mini server beats heavy-duty servers for 24/7 operation
  2. Networking is complex: VLANs simulate real-world boundaries, and firewalls are tricky to set up
  3. ZFS is invaluable: snapshots, replication, compression, self-healing
  4. Proxy + DNS = insight: these components are critical in any environment

 

What’s Next

In Part 3, we’ll configure the software stack:

  • Server provisioning, DNS on Docker
  • Docker, K3s, and orchestration
  • GitOps and self-hosted CI/CD
  • Authelia for identity + RBAC
  • Prometheus, Grafana, and what metrics to collect

 

Let’s Connect

  • Are you running a homelab?
  • Are you exploring platform engineering or dev infra strategy?
  • Want to prototype internal tooling outside the cloud?

Let’s share ideas. Reach out on LinkedIn, Mastodon, or GitHub.

 

Author:

Marllon is a Senior Software Engineer with over 20 years of experience in the industry. While his core expertise lies in Java, he also has hands-on experience with Python and Go. His technical skill set includes SQL and NoSQL databases, Docker, Kubernetes, Jenkins, Kafka, and cloud platforms such as AWS, Azure, and GCP. He is well-versed in Microservices and Event-Driven Architecture. Marllon is familiar with agile methodologies and consistently adheres to software development best practices.
