
Project: Kubernetes Raspberry Pi Cluster

Executive Summary

Built a 3-node Kubernetes cluster from one Raspberry Pi 4 and two Raspberry Pi 5 boards to learn container orchestration and distributed computing. The project involved setting up a complete K3s environment with proper networking via a TP-Link TL-SG2008 V3 managed switch, implementing container deployments, and exploring a microservices architecture for home lab applications.
At a glance: 3 cluster nodes · 32GB total RAM · K3s Kubernetes distribution · completed Fall 2024
[Photos: Raspberry Pi Cluster Assembly · Network Configuration · Kubernetes Dashboard · Cluster in Operation · Complete Setup]
Project Overview

This project started as a deep dive into container orchestration and distributed computing using affordable Raspberry Pi hardware. I wanted to understand how modern cloud infrastructure works at a fundamental level by building my own miniature data center that could run real applications using Kubernetes.

The cluster consists of three nodes: one Raspberry Pi 4 (8GB) serving as the master node, and two Raspberry Pi 5 (8GB each) as worker nodes. All three boards are connected through a TP-Link TL-SG2008 V3 managed switch, providing gigabit Ethernet connectivity and VLAN capabilities for network segregation.

Using K3s, a lightweight Kubernetes distribution perfect for edge computing and IoT devices, I was able to deploy a fully functional Kubernetes cluster that runs various containerized applications including web services, databases, monitoring tools, and development environments. The entire setup consumes less than 30W of power while providing a powerful learning platform for DevOps practices.

Components & Materials

Hardware Components

  • 1x Raspberry Pi 4 Model B (8GB RAM) - Master Node
  • 2x Raspberry Pi 5 (8GB RAM) - Worker Nodes
  • TP-Link TL-SG2008 V3 8-Port Managed Switch
  • 3x Samsung EVO Select 128GB microSD cards (A2 rated for better I/O)
  • 3x Official Raspberry Pi power supplies (Pi 4: 15W, Pi 5: 27W each)
  • 4x Cat6 Ethernet cables (varying lengths)
  • GeekPi cluster case with cooling fans
  • 3x Heatsink kits with thermal pads

Software Stack

  • Raspberry Pi OS Lite (64-bit) - Debian-based operating system
  • K3s v1.28.3 - Lightweight Kubernetes distribution
  • Docker/containerd - Container runtime
  • Helm 3 - Kubernetes package manager
  • MetalLB - Load balancer for bare metal Kubernetes
  • Traefik - Ingress controller and reverse proxy
  • Prometheus & Grafana - Monitoring and visualization
  • Portainer - Container management UI

Network Configuration

  • Static IP addressing for all nodes
  • VLAN 10 for cluster internal communication
  • VLAN 20 for external service access
  • Link aggregation for improved bandwidth (Pi 5 nodes)
  • QoS policies for traffic prioritization
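
As a rough illustration of the node-side half of the VLAN setup, a tagged 802.1Q sub-interface can be created with iproute2. The interface name and addresses below are placeholders rather than the cluster's actual values, and the commands do not persist across reboots:

    # Create an 802.1Q sub-interface for VLAN 10 (cluster-internal traffic) on eth0
    sudo ip link add link eth0 name eth0.10 type vlan id 10
    # Placeholder address; the actual VLAN 10 subnet is not documented above
    sudo ip addr add 10.10.0.101/24 dev eth0.10
    sudo ip link set eth0.10 up
    # Verify the tagged interface
    ip -d link show eth0.10
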
Technologies: Kubernetes, K3s, Docker, Helm, Linux, networking, YAML, Bash scripting
Assembly & Configuration

Physical Assembly

  1. Case Preparation: Assembled the GeekPi cluster case with proper standoffs and fan mounting
  2. Board Installation: Carefully mounted each Raspberry Pi with heatsinks and thermal pads applied
  3. Power Distribution: Organized power cables to minimize clutter and ensure proper airflow
  4. Network Cabling: Connected each Pi to the TP-Link switch with color-coded Cat6 cables
  5. Cooling Setup: Configured fan speeds for optimal temperature management (targeting 45-50°C under load)
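
A quick way to sanity-check that the fans keep the boards in the 45-50°C target range is to poll the SoC temperature with vcgencmd; a loop along these lines is enough for spot checks while tuning fan speeds:

    # Poll the SoC temperature every 10 seconds while the cluster is under load
    while true; do
        vcgencmd measure_temp    # prints e.g. temp=47.2'C
        sleep 10
    done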

Operating System Setup

Flashed Raspberry Pi OS Lite (64-bit) to all microSD cards using Raspberry Pi Imager, with SSH enabled and an initial user configured, then set up passwordless SSH access with SSH keys for secure remote management.

  • Configured static IP addresses: Master (192.168.1.100), Workers (192.168.1.101-102)
  • Updated all systems and installed essential packages
  • Disabled swap on all nodes (required for Kubernetes)
  • Enabled cgroups in boot configuration
  • Set up NTP synchronization for accurate time keeping
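For reference, the addressing, swap, and cgroup steps look roughly like the following on Raspberry Pi OS. The gateway and DNS addresses are assumptions for illustration, and newer releases keep cmdline.txt under /boot/firmware/ rather than /boot/:

    # Static address via dhcpcd (gateway/DNS shown here are assumed, not documented above)
    printf '%s\n' \
        'interface eth0' \
        'static ip_address=192.168.1.100/24' \
        'static routers=192.168.1.1' \
        'static domain_name_servers=192.168.1.1' | sudo tee -a /etc/dhcpcd.conf

    # Disable swap entirely (required for Kubernetes)
    sudo dphys-swapfile swapoff
    sudo dphys-swapfile uninstall
    sudo systemctl disable dphys-swapfile

    # Enable memory cgroups; cmdline.txt must remain a single line
    sudo sed -i 's/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
    sudo reboot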

Kubernetes Installation

  1. Master Node Setup: Installed K3s server with custom configuration for cluster initialization
  2. Worker Nodes: Joined worker nodes using K3s agent with master node token
  3. Network Plugin: Configured Flannel for pod networking with VXLAN backend
  4. Storage Class: Set up local-path provisioner for persistent volume claims
  5. Load Balancer: Deployed MetalLB with IP address pool for service exposure
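The install itself is short with K3s; a minimal sketch of the commands (the exact flags used for this cluster may have differed slightly) looks like:

    # On the master (Pi 4): install the K3s server
    curl -sfL https://get.k3s.io | sh -

    # Print the join token the server generated
    sudo cat /var/lib/rancher/k3s/server/node-token

    # On each worker (Pi 5): install the K3s agent and join the cluster
    curl -sfL https://get.k3s.io | \
        K3S_URL=https://192.168.1.100:6443 K3S_TOKEN=<token-from-master> sh -

    # Back on the master: confirm all three nodes are Ready
    sudo kubectl get nodes -o wide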

Challenges Overcome

  • Resolved ARM64 compatibility issues with certain container images
  • Optimized memory usage to prevent OOM kills on the Pi 4 master node
  • Configured proper iptables rules for inter-node communication
  • Implemented CPU throttling prevention with active cooling
  • Debugged networking issues related to VLAN tagging on the managed switch
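
For the memory pressure on the Pi 4 in particular, two generic levers are worth sketching: keeping general workloads off the control plane and capping per-pod memory. The node and deployment names below are placeholders, and this is an illustration rather than the exact commands used:

    # Keep regular workloads off the memory-constrained Pi 4 control plane
    kubectl taint nodes pi-master node-role.kubernetes.io/control-plane=:NoSchedule

    # Give a deployment explicit memory requests/limits so one pod can't starve the node
    kubectl set resources deployment/example-app \
        --requests=memory=128Mi --limits=memory=256Mi
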
Deployment & Testing

Application Deployments

Successfully deployed and tested various containerized applications to validate cluster functionality:

  • WordPress + MySQL: Multi-tier web application with persistent storage
  • Pi-hole: Network-wide ad blocking and DNS management
  • Home Assistant: Home automation platform with IoT device integration
  • GitLab CE: Self-hosted Git repository and CI/CD platform
  • Nextcloud: Personal cloud storage solution with 1TB external SSD
  • Node-RED: Flow-based development tool for IoT applications
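Most of these were deployed from plain manifests or Helm charts. As an illustrative example of the Helm path (the chart and values shown are not necessarily the exact ones used, and ARM64 image availability has to be checked per chart):

    # Add a chart repository and install a multi-tier app (WordPress + database)
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update
    helm install blog bitnami/wordpress \
        --set persistence.storageClass=local-path    # use the K3s local-path provisioner
    kubectl get pods -w    # watch the web and database pods come up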

Performance Testing

  • Load tested with Apache Bench achieving 500+ requests/second for static content
  • Stress tested pod scaling from 1 to 20 replicas with sub-second scheduling
  • Network throughput testing showed 900+ Mbps between nodes
  • Database benchmarks using sysbench for MySQL performance validation
  • Container startup times averaging 2-3 seconds for typical workloads
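The load and scaling numbers above came from standard tooling; illustrative commands (the target IP and deployment name are placeholders) look like:

    # Static-content load test against a MetalLB-exposed service (placeholder IP)
    ab -n 10000 -c 100 http://192.168.1.200/

    # Scale a deployment and watch how quickly new pods are scheduled
    kubectl scale deployment/example-app --replicas=20
    kubectl get pods -l app=example-app -w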

Monitoring Implementation

  • Deployed Prometheus for metrics collection from all nodes and pods
  • Configured Grafana dashboards for cluster visualization and alerting
  • Set up node-exporter for system-level metrics (CPU, memory, disk, network)
  • Implemented log aggregation using Loki for centralized logging
  • Created custom alerts for resource utilization and service availability
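One common way to stand up this stack is the kube-prometheus-stack chart, which bundles Prometheus, Grafana, and node-exporter in a single release; a minimal sketch (the exact charts and values used here may have differed):

    # Install Prometheus + Grafana + node-exporter in one chart
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install monitoring prometheus-community/kube-prometheus-stack \
        --namespace monitoring --create-namespace

    # Port-forward Grafana locally to view the dashboards
    kubectl -n monitoring port-forward svc/monitoring-grafana 3000:80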

High Availability Testing

  • Simulated node failures to test pod rescheduling and recovery
  • Tested rolling updates with zero-downtime deployments
  • Validated data persistence across pod restarts and migrations
  • Implemented backup strategies using Velero for disaster recovery
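
The node-failure drills boil down to draining a worker and watching the scheduler move its pods; node names below are placeholders:

    # Take a worker out of service and watch pods reschedule onto the other worker
    kubectl drain pi-worker-1 --ignore-daemonsets --delete-emptydir-data
    kubectl get pods -A -o wide -w

    # Return the node to the pool once testing is done
    kubectl uncordon pi-worker-1
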
Results & Outcomes

Cluster Performance

  • 24/7 uptime for 3+ months
  • Average CPU usage: 35-40%
  • Memory utilization: 60-70%
  • Power consumption: ~28W total
  • Network latency: <1ms inter-node

Skills Developed

  • Kubernetes administration and troubleshooting
  • Container orchestration best practices
  • Network segmentation and VLANs
  • Infrastructure as Code (IaC) with YAML
  • Monitoring and observability practices

Project Applications

  • Self-hosted home services platform
  • Development and testing environment
  • Learning platform for cloud technologies
  • IoT data processing pipeline
  • Personal CI/CD infrastructure

Cost Analysis

Total project cost was approximately $450, including all hardware components and accessories. Compared with running equivalent workloads on cloud services, the cluster pays for itself in roughly six months. Annual electricity cost is about $30 at $0.12/kWh (≈28 W × 8,760 h ≈ 245 kWh per year), making it extremely cost-effective for continuous operation.

Future Enhancements

  • Add NVMe storage via USB 3.0 adapters for improved I/O performance
  • Implement Istio service mesh for advanced traffic management
  • Expand to 5 nodes for true high availability with etcd quorum
  • Deploy machine learning workloads using K3s and Edge computing frameworks
  • Integrate with public cloud for hybrid cloud scenarios

Lessons Learned

This project provided invaluable hands-on experience with container orchestration, distributed systems, and infrastructure management. The Raspberry Pi platform proved to be remarkably capable for running production-grade Kubernetes workloads, despite the ARM architecture limitations. The combination of K3s and managed networking hardware created a robust, scalable platform that mirrors enterprise environments while remaining accessible for learning and experimentation. Most importantly, troubleshooting various issues deepened my understanding of Linux networking, container runtimes, and the complexity of distributed systems.