[6/24] R is for ReplicaSets: Ensuring High Availability
This is Post #6 in the Kubernetes A-to-Z Series
Reading Order: ← Previous: Services | Next: Namespaces →
Series Progress: 6/24 complete | Difficulty: Intermediate | Time: 25-30 min | Part 2/6: Core Workloads
Welcome to the sixth post in our Kubernetes A-to-Z Series! Now that you understand Services, let's explore ReplicaSets - the mechanism that ensures your applications maintain the desired number of pod replicas for high availability. While Deployments manage ReplicaSets automatically, understanding ReplicaSets is crucial for advanced Kubernetes operations.
What is a ReplicaSet?
A ReplicaSet is a Kubernetes workload that maintains a stable set of replica pods running at any given time. It ensures high availability by automatically replacing failed pods and scaling applications based on demand.
ReplicaSet vs Deployment vs Pod
Pod (Single Instance):
┌──────────────────┐
│      Pod 1       │
│  ┌────────────┐  │
│  │ Container  │  │
│  │ (my-app)   │  │
│  └────────────┘  │
└──────────────────┘
Single point of failure

ReplicaSet (Multiple Instances):
┌───────────────────────────────┐
│     ReplicaSet Controller     │
│      Desired: 3 replicas      │
│  ┌──────┐ ┌──────┐ ┌──────┐   │
│  │ Pod1 │ │ Pod2 │ │ Pod3 │   │
│  └──────┘ └──────┘ └──────┘   │
└───────────────────────────────┘
High availability, self-healing

Deployment (Manages ReplicaSets):
┌───────────────────────────────────┐
│       Deployment Controller       │
│  ┌─────────────────────────────┐  │
│  │         ReplicaSet          │  │
│  │  ┌──────┐ ┌──────┐ ┌──────┐ │  │
│  │  │ Pod1 │ │ Pod2 │ │ Pod3 │ │  │
│  │  └──────┘ └──────┘ └──────┘ │  │
│  └─────────────────────────────┘  │
└───────────────────────────────────┘
Manages ReplicaSet lifecycle
Key ReplicaSet Features
- High Availability: Maintains desired replica count automatically
- Self-Healing: Replaces failed or deleted pods
- Scaling: Supports horizontal pod scaling
- Pod Distribution: Can distribute pods across nodes and zones
- Rolling Updates: Deployments orchestrate zero-downtime rolling updates by managing multiple ReplicaSets
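The self-healing and scaling behaviors above boil down to a reconciliation loop: the controller compares the desired replica count with the pods it observes, then creates or deletes pods to converge. A minimal Python sketch of that idea (the function and pod names here are illustrative, not the actual controller code):

```python
def reconcile(desired: int, running: list[str]) -> tuple[list[str], list[str]]:
    """Return (pods_to_create, pods_to_delete) to converge on the desired count."""
    diff = desired - len(running)
    if diff > 0:
        # Too few pods: schedule replacements (e.g. after a node failure).
        return [f"pod-{i}" for i in range(diff)], []
    if diff < 0:
        # Too many pods: pick surplus pods to terminate.
        return [], running[:-diff]
    return [], []

# A pod crashed: only 2 of the desired 3 replicas remain, so one is created.
create, delete = reconcile(3, ["pod-a", "pod-b"])
```

The real controller runs this comparison continuously against the API server's state, which is why deleting a managed pod just causes a replacement to appear.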
Creating ReplicaSets
Basic ReplicaSet YAML
# replicaset-basic.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-replicaset
  labels:
    app: webapp
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: webapp
        image: myapp:v1.0
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
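The selector.matchLabels field above is what ties the ReplicaSet to its pods: the controller counts every pod whose labels contain all of the selector's key/value pairs (extra labels on the pod are fine). A small Python sketch of that matching rule:

```python
def matches(selector: dict, pod_labels: dict) -> bool:
    """A pod matches when every selector key/value pair appears in its labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {"tier": "frontend"}
print(matches(selector, {"tier": "frontend", "app": "webapp"}))  # True
print(matches(selector, {"tier": "backend"}))                    # False
```

This is also why the selector must match the pod template's labels: otherwise the ReplicaSet would create pods it could never count, and the API server rejects such a manifest.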
Creating and Managing ReplicaSets
# Create ReplicaSet
kubectl apply -f replicaset-basic.yaml
# Get ReplicaSets
kubectl get replicasets
kubectl get replicaset webapp-replicaset
# Describe ReplicaSet
kubectl describe replicaset webapp-replicaset
# Get pods managed by ReplicaSet
kubectl get pods -l tier=frontend
# Scale ReplicaSet
kubectl scale replicaset webapp-replicaset --replicas=5
# Delete ReplicaSet
kubectl delete replicaset webapp-replicaset
ReplicaSet vs Deployment
When to Use ReplicaSets Directly
Direct use is uncommon: it fits workloads that need replication but no rollout management, such as an agent whose lifecycle is handled by a custom controller.
# replicaset-advanced.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: monitoring-replicaset
  labels:
    app: monitoring-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent:v1.0
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
        volumeMounts:
        - name: host-root
          mountPath: /host
          readOnly: true
      volumes:
      - name: host-root
        hostPath:
          path: /
Deployment Managing ReplicaSet (Recommended)
# deployment-with-replicaset.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: myapp:v1.0
        ports:
        - containerPort: 8080
Advanced ReplicaSet Features
1. Multi-Zone Distribution
# multi-zone-replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-multi-zone
spec:
  replicas: 6
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - webapp
              topologyKey: topology.kubernetes.io/zone
      containers:
      - name: webapp
        image: myapp:v1.0
        ports:
        - containerPort: 8080
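The preferred anti-affinity above nudges the scheduler to keep webapp pods in different zones, so with enough capacity the placement tends toward an even, round-robin-like spread. An illustrative Python sketch of that tendency (real scheduling also weighs node resources and other constraints, so this is an idealization):

```python
from itertools import cycle

def spread(replicas: int, zones: list[str]) -> dict[str, int]:
    """Assign replicas to zones round-robin, approximating zone anti-affinity."""
    counts = {z: 0 for z in zones}
    for _, zone in zip(range(replicas), cycle(zones)):
        counts[zone] += 1
    return counts

# 6 replicas over 3 zones -> 2 per zone; losing one zone leaves 4 running.
print(spread(6, ["us-east-1a", "us-east-1b", "us-east-1c"]))
```

Because the rule is "preferred" rather than "required", the scheduler will still place pods in an already-used zone if no other zone has capacity.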
2. ReplicaSet with Node Affinity
# replicaset-with-affinity.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-affinity
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-type
                operator: In
                values:
                - compute
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webapp
            topologyKey: kubernetes.io/hostname
      containers:
      - name: webapp
        image: myapp:v1.0
        ports:
        - containerPort: 8080
Scaling Strategies
1. Manual Scaling
# Scale ReplicaSet manually
kubectl scale replicaset webapp-replicaset --replicas=5
# Scale multiple ReplicaSets
kubectl scale replicaset webapp-replicaset backend-replicaset --replicas=3
2. Horizontal Pod Autoscaling (HPA)
# hpa-replicaset.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicaSet
    name: webapp-replicaset
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
# Apply HPA
kubectl apply -f hpa-replicaset.yaml
# Check HPA status
kubectl get hpa webapp-hpa
# Watch HPA in action
kubectl get hpa webapp-hpa --watch
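Under the hood, the HPA's core scaling rule (documented in the Kubernetes HPA docs) is desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to minReplicas/maxReplicas. A quick Python sketch of that calculation (helper name is illustrative):

```python
import math

def desired_replicas(current: int, current_util: float, target_util: float,
                     min_r: int, max_r: int) -> int:
    """HPA rule: scale proportionally to metric pressure, then clamp to bounds."""
    desired = math.ceil(current * current_util / target_util)
    return max(min_r, min(max_r, desired))

# 3 replicas averaging 90% CPU against a 70% target -> scale out to 4.
print(desired_replicas(3, 90, 70, min_r=2, max_r=10))  # 4
```

With multiple metrics configured, as in the manifest above, the HPA computes this for each metric and takes the largest result.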
Pod Disruption Budgets (PDBs)
Understanding PDBs
# pdb-example.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: webapp-pdb
spec:
  minAvailable: 2       # Minimum pods that must be available
  # maxUnavailable: 1   # Alternative: maximum pods that can be unavailable
  selector:
    matchLabels:
      app: webapp
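Conceptually, a PDB acts as a gate on voluntary disruptions (node drains, eviction API calls): an eviction is admitted only if the remaining healthy pods would still satisfy the budget. A simplified Python sketch of that check, assuming the minAvailable form:

```python
def eviction_allowed(healthy: int, min_available: int) -> bool:
    """Permit a voluntary eviction only if minAvailable still holds afterwards."""
    return healthy - 1 >= min_available

# 5 healthy pods, minAvailable: 2 -> draining a node can proceed.
print(eviction_allowed(5, 2))  # True
# Only 2 healthy pods remain -> eviction is refused until a replacement is ready.
print(eviction_allowed(2, 2))  # False
```

Note that PDBs only constrain voluntary disruptions; they cannot prevent involuntary failures such as a node crash.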
PDB with ReplicaSet
# replicaset-with-pdb.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-replicaset
spec:
  replicas: 5
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: myapp:v1.0
        ports:
        - containerPort: 8080
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: webapp-pdb
spec:
  minAvailable: 3   # At least 3 pods must be available
  selector:
    matchLabels:
      app: webapp
High Availability Patterns
1. Multi-Region Deployment
# multi-region-replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-region-a
  labels:
    app: webapp
    region: us-east-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
      region: us-east-1
  template:
    metadata:
      labels:
        app: webapp
        region: us-east-1
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - us-east-1a
                - us-east-1b
      containers:
      - name: webapp
        image: myapp:v1.0
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-region-b
  labels:
    app: webapp
    region: us-west-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
      region: us-west-2
  template:
    metadata:
      labels:
        app: webapp
        region: us-west-2
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - us-west-2a
                - us-west-2b
      containers:
      - name: webapp
        image: myapp:v1.0
        ports:
        - containerPort: 8080
2. Active-Passive Setup
# active-passive-replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-active
  labels:
    app: webapp
    role: active
spec:
  replicas: 5
  selector:
    matchLabels:
      app: webapp
      role: active
  template:
    metadata:
      labels:
        app: webapp
        role: active
    spec:
      priorityClassName: high-priority
      containers:
      - name: webapp
        image: myapp:v1.0
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-passive
  labels:
    app: webapp
    role: passive
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
      role: passive
  template:
    metadata:
      labels:
        app: webapp
        role: passive
    spec:
      priorityClassName: low-priority
      containers:
      - name: webapp
        image: myapp:v1.0
        ports:
        - containerPort: 8080
ReplicaSet Troubleshooting
Common Issues and Solutions
# Check ReplicaSet status
kubectl get replicasets
kubectl describe replicaset webapp-replicaset
# Check pod status
kubectl get pods -l app=webapp
# Check pod events
kubectl get events --field-selector involvedObject.name=webapp-replicaset
# Check if pods match selector
kubectl get pods --show-labels | grep webapp
# Check node resources
kubectl top nodes
kubectl describe nodes
# Check pod logs
kubectl logs -l app=webapp --tail=50
Debugging ReplicaSet Issues
# Check if ReplicaSet is at desired replica count
kubectl get replicaset webapp-replicaset
# Check for pod creation failures
kubectl get events --sort-by='.lastTimestamp' | grep webapp
# Check node affinity issues
kubectl describe pods -l app=webapp | grep -A 10 "Node-Selectors"
# Check resource constraints
kubectl describe pods -l app=webapp | grep -A 5 "Events:"
# Force ReplicaSet to recreate pods
kubectl delete pods -l app=webapp
ReplicaSet Best Practices
1. Resource Management
# replicaset-best-practices.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: myapp:v1.0
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
2. Pod Disruption Budgets
# high-availability-setup.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-ha-replicaset
spec:
  replicas: 5
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webapp
            topologyKey: kubernetes.io/hostname
      containers:
      - name: webapp
        image: myapp:v1.0
        ports:
        - containerPort: 8080
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: webapp-pdb
spec:
  minAvailable: 3
  selector:
    matchLabels:
      app: webapp
Key Takeaways
- ReplicaSets ensure high availability by maintaining desired pod count
- Self-healing automatically replaces failed or deleted pods
- Scaling can be manual or automatic with HPA
- Pod Disruption Budgets maintain availability during updates
- Multi-zone distribution provides geographic redundancy
- Health probes ensure only healthy pods are counted
- Resource management prevents resource contention
Command Reference Cheatsheet
# ReplicaSet Management
kubectl get replicasets
kubectl describe replicaset webapp-replicaset
kubectl scale replicaset webapp-replicaset --replicas=5
# Pod Disruption Budgets
kubectl get poddisruptionbudgets
kubectl create pdb webapp-pdb --selector=app=webapp --min-available=3
# Scaling and Autoscaling
kubectl scale replicaset webapp-replicaset --replicas=10
kubectl autoscale replicaset webapp-replicaset --min=2 --max=10 --cpu-percent=70
# High Availability
kubectl get pods -l app=webapp -o wide
kubectl get nodes --show-labels | grep zone
kubectl top pods -l app=webapp
# Debugging
kubectl get events --field-selector involvedObject.name=webapp-replicaset
kubectl logs -l app=webapp --tail=50
Next Steps
Now that you understand ReplicaSets and high availability patterns, you're ready to explore Namespaces in the next post. We'll learn how to organize your cluster, implement multi-tenancy, and manage resources effectively across different teams and applications.
Resources for Further Learning
- Official Kubernetes ReplicaSet Documentation
- Pod Disruption Budgets Guide
- High Availability Best Practices
- Multi-Zone Deployment Patterns
Series Navigation:
Complete Series: Kubernetes A-to-Z Series Overview