Optimizing Kubernetes Node Placements Based on User Footprint and Latency

Welcome to Part I of my Kubernetes series, where we dive into Optimizing Node Placements Based on User Footprint and Latency. In today’s world of global-scale applications, every millisecond counts. We’ll explore how to strategically place Kubernetes nodes across multiple regions and edge locations to minimize latency and enhance performance. Whether you run a global SaaS platform or a latency-sensitive app, this article will show you how to harness Kubernetes for smarter, user-centric infrastructure that scales seamlessly across the globe.

Why Optimize Node Placement?

User experience is directly tied to application performance. When users are served from nodes closest to their geographical location, latency is reduced, leading to faster load times and improved interaction. By optimizing Kubernetes node placements based on user footprints, you can achieve a more responsive application while efficiently utilizing resources.

Real-World Example: Consider an e-commerce platform with users distributed across Europe and North America. By deploying Kubernetes clusters in both regions, the platform can minimize latency for users by routing them to the nearest cluster. With the configurations described below, when a user in Europe accesses the application, they are directed to the Europe West cluster, significantly reducing latency and improving their shopping experience.
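
The "nearest cluster" decision above can be sketched as a simple distance computation. This Python sketch picks the closest cluster for a user's coordinates; the region coordinates are illustrative assumptions, and real routing systems typically use measured latency or GeoDNS rather than raw distance:

```python
from math import radians, sin, cos, asin, sqrt

# Approximate coordinates for each cluster's region (illustrative values).
CLUSTERS = {
    "europe-west": (50.45, 3.82),    # roughly europe-west1 (Belgium)
    "us-central": (41.26, -95.86),   # roughly us-central1 (Iowa)
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearest_cluster(user_lat, user_lon):
    """Return the cluster region closest to the user."""
    return min(CLUSTERS, key=lambda r: haversine_km(user_lat, user_lon, *CLUSTERS[r]))

print(nearest_cluster(48.85, 2.35))   # a user in Paris -> 'europe-west'
print(nearest_cluster(40.71, -74.0))  # a user in New York -> 'us-central'
```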

Step 1: Set Up Multi-Region Kubernetes Clusters

To implement an architecture that optimizes Kubernetes node placements, first create Kubernetes clusters in different regions. Below are commands to create clusters in Google Kubernetes Engine (GKE) and Amazon EKS.

Google Kubernetes Engine (GKE)

# Create a cluster in US Central
gcloud container clusters create my-cluster-us-central --region=us-central1

# Create a cluster in Europe West
gcloud container clusters create my-cluster-europe-west --region=europe-west1

Amazon EKS

# Create an EKS cluster in US East
eksctl create cluster --name my-cluster-us-east --region us-east-1 --nodegroup-name standard-nodes --node-type t2.micro --nodes 3

# Create an EKS cluster in Europe
eksctl create cluster --name my-cluster-eu-west --region eu-west-1 --nodegroup-name standard-nodes --node-type t2.micro --nodes 3

Step 2: Configure Node Labels for Regions

Label your nodes to indicate their regions. This helps define node affinity for scheduling. (Managed clusters already set the well-known topology.kubernetes.io/region label on each node automatically, which you can use instead; the examples below use a custom region label for clarity.)

# GKE: Label nodes in US Central
kubectl label nodes <us-central-node-name> region=us-central

# GKE: Label nodes in Europe West
kubectl label nodes <europe-west-node-name> region=europe-west

# EKS: Label nodes similarly
kubectl label nodes <us-east-node-name> region=us-east
kubectl label nodes <eu-west-node-name> region=eu-west
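
Before relying on these labels for scheduling, it is worth confirming that every node actually carries one (for example with kubectl get nodes -L region). The check itself is trivial; this sketch runs it against a hand-written stand-in for kubectl get nodes -o json output:

```python
# Minimal stand-in for `kubectl get nodes -o json` output (illustrative data).
nodes = {
    "items": [
        {"metadata": {"name": "gke-node-1", "labels": {"region": "us-central"}}},
        {"metadata": {"name": "gke-node-2", "labels": {}}},
    ]
}

def unlabeled_nodes(node_list, key="region"):
    """Return names of nodes missing the given label."""
    return [
        n["metadata"]["name"]
        for n in node_list["items"]
        if key not in n["metadata"].get("labels", {})
    ]

print(unlabeled_nodes(nodes))  # nodes that still need `kubectl label`
```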

Step 3: Define Node Affinity in Deployment

Use node affinity rules to ensure that each regional deployment's pods are scheduled onto nodes in that region; steering each user to the nearest region is then handled at the traffic layer in Step 5.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: region
                    operator: In
                    values:
                      - europe-west # or us-central based on the user's location
      containers:
        - name: web-app
          image: my-web-app:latest
          ports:
            - containerPort: 80
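
Under the hood, requiredDuringSchedulingIgnoredDuringExecution is a hard filter: a node is eligible only if its labels satisfy every matchExpression. A simplified sketch of that check, supporting only the In operator used above:

```python
def node_matches(node_labels, match_expressions):
    """Hard-filter semantics: every expression must hold (In operator only)."""
    for expr in match_expressions:
        if expr["operator"] == "In":
            if node_labels.get(expr["key"]) not in expr["values"]:
                return False
        else:
            raise NotImplementedError(expr["operator"])
    return True

affinity = [{"key": "region", "operator": "In", "values": ["europe-west"]}]
print(node_matches({"region": "europe-west"}, affinity))  # True: pod can schedule here
print(node_matches({"region": "us-central"}, affinity))   # False: node filtered out
```

Because the rule is "required", a pod stays Pending if no node matches, so make sure the labels from Step 2 exist before rolling this out.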

Step 4: Customize kube-scheduler

To leverage custom scheduling based on latency, implement a custom scheduler plugin. Here’s a simplified way to do it:

  1. Create a Custom Scheduler: Set up a new scheduler that will consider latency.
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-scheduler-config
  namespace: kube-system
data:
  config: |
    apiVersion: kubescheduler.config.k8s.io/v1
    kind: KubeSchedulerConfiguration
    profiles:
      - schedulerName: custom-scheduler
        plugins:
          score:
            enabled:
              - name: LatencyScoringPlugin # Custom plugin for latency scoring
  2. Deploy the Custom Scheduler: Use the following YAML to deploy the custom scheduler.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-scheduler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-scheduler
  template:
    metadata:
      labels:
        app: custom-scheduler
    spec:
      containers:
        - name: custom-scheduler
          image: my-custom-scheduler:latest
          command:
            - /bin/sh
            - -c
            - >
              kube-scheduler --config=/etc/kubernetes/scheduler-config/config --v=4
          volumeMounts:
            - name: scheduler-config
              mountPath: /etc/kubernetes/scheduler-config
      volumes:
        - name: scheduler-config
          configMap:
            name: custom-scheduler-config
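
LatencyScoringPlugin is a hypothetical plugin name here; a real implementation would be a Go Score plugin compiled into the scheduler image. The scoring logic such a plugin might apply can be sketched as: invert each node's measured round-trip time and normalize into the 0–100 range the scheduler framework expects, so the lowest-latency node scores highest:

```python
MAX_NODE_SCORE = 100  # the scheduler framework's score ceiling

def latency_scores(rtt_ms):
    """Map per-node RTTs (ms) to 0-100 scores; lowest latency scores highest."""
    worst = max(rtt_ms.values())
    best = min(rtt_ms.values())
    if worst == best:
        return {node: MAX_NODE_SCORE for node in rtt_ms}
    return {
        node: round(MAX_NODE_SCORE * (worst - rtt) / (worst - best))
        for node, rtt in rtt_ms.items()
    }

# Illustrative RTTs from the target user population to each node.
print(latency_scores({"node-eu-1": 12, "node-us-1": 95, "node-us-2": 54}))
```

The scheduler then combines this score with its other scoring plugins when ranking feasible nodes, so latency influences placement without overriding resource constraints.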

Step 5: Set Up Service Mesh for Traffic Management

Use Istio to manage traffic between regions and optimize based on latency.

  1. Install Istio:
curl -L https://istio.io/downloadIstio | sh -
cd istio-*
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y
  2. Define Virtual Services and Destination Rules: Route traffic based on proximity.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web-app
spec:
  hosts:
    - web-app
  http:
    - route:
        - destination:
            host: web-app
            subset: europe-west
          weight: 80
        - destination:
            host: web-app
            subset: us-central
          weight: 20

---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: web-app
spec:
  host: web-app
  subsets:
    - name: europe-west
      labels:
        region: europe-west
    - name: us-central
      labels:
        region: us-central
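
The 80/20 split above can be sketched as weighted selection over the subsets. Istio's actual selection is probabilistic per request; the deterministic schedule in this sketch just makes the proportion easy to see:

```python
from collections import Counter
from itertools import cycle

# Subset weights mirroring the VirtualService above.
SUBSETS = [("europe-west", 80), ("us-central", 20)]

def weighted_sequence(subsets):
    """Yield destinations in proportion to their weights (deterministic)."""
    schedule = [name for name, weight in subsets for _ in range(weight)]
    return cycle(schedule)

route = weighted_sequence(SUBSETS)
counts = Counter(next(route) for _ in range(1000))
print(counts)  # 800 requests to europe-west, 200 to us-central
```

In practice you would adjust these weights per region, or pair them with Istio locality load balancing so proximity, not static weights, drives the split.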

Step 6: Test the Deployment

To validate the deployment and performance optimization, use a load testing tool like k6 to simulate user traffic and measure latency.

  1. Install k6:
brew install k6 # For macOS
  2. Create a test script (script.js):
import http from 'k6/http';
import { sleep } from 'k6';

export default function () {
    http.get('http://<web-app-url>');
    sleep(1);
}
  3. Run the load test:
k6 run --vus 10 --duration 30s script.js
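
k6 reports percentile metrics such as p(95) out of the box. For intuition about what that number means, here is how a latency percentile is computed from raw samples using the nearest-rank method (one of several conventions; the sample values are illustrative):

```python
def percentile(samples, pct):
    """Nearest-rank percentile: smallest sample covering pct% of the data."""
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceil(len * pct / 100)
    return ordered[int(rank) - 1]

# Illustrative request latencies (ms) from a test run; note the slow outliers.
latencies_ms = [32, 35, 31, 180, 33, 36, 34, 40, 38, 200]
print(percentile(latencies_ms, 50))  # median: 35 ms
print(percentile(latencies_ms, 95))  # p95: 200 ms, dominated by the outliers
```

Comparing p95 before and after the regional rollout is usually more telling than the average, since cross-region requests show up as exactly this kind of tail.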

Step 7: Monitor and Optimize

Use Kubernetes monitoring tools like Prometheus and Grafana to visualize performance metrics and optimize further based on real-time data.

  1. Install Prometheus:
# Note: plain `kubectl apply` can fail on the bundle's large CRDs
kubectl create -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/master/bundle.yaml
  2. Install Grafana using its official Helm chart (the raw chart templates contain Helm templating and cannot be applied directly with kubectl):
helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana
  3. Access the Grafana dashboard:

Find the service's external IP and access it via your browser.
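
Prometheus stores request latencies as cumulative histogram buckets, and Grafana panels typically apply histogram_quantile() to them. The interpolation that function performs can be sketched as follows (the bucket bounds and counts here are illustrative):

```python
def histogram_quantile(q, buckets):
    """Estimate a quantile from cumulative buckets [(upper_bound, count), ...],
    interpolating linearly inside a bucket, as Prometheus does."""
    total = buckets[-1][1]  # the +Inf bucket counts all observations
    target = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= target:
            if bound == float("inf"):
                return prev_bound  # cap at the highest finite bound
            return prev_bound + (bound - prev_bound) * (target - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count

# Cumulative counts: 60 requests <= 50 ms, 90 <= 100 ms, 100 total.
buckets = [(50, 60), (100, 90), (float("inf"), 100)]
print(histogram_quantile(0.9, buckets))  # p90 lands exactly at the 100 ms bound
```

This is why bucket boundaries matter: a quantile is only as precise as the buckets around it, so configure histogram buckets near the latency targets you care about.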

Conclusion

By following these steps, you can successfully implement a multi-region Kubernetes architecture that optimizes node placement based on user footprint and latency. The combination of custom scheduling, node affinity, and traffic management will significantly enhance user experience through reduced latency. Continuously monitor performance to make necessary adjustments and keep your deployment running smoothly.

Additional Resources

  1. Kubernetes Node Affinity and Anti-Affinity

  2. Kube-Scheduler Overview

  3. GKE Multi-Zone and Multi-Region Clusters

  4. Creating EKS Clusters in Multiple Regions

  5. Istio Virtual Service and Destination Rules

  6. Prometheus Operator Overview

  7. Deploying Grafana on Kubernetes

Stay tuned for Part II, where we’ll dive into Tweaking Kubernetes Deployments for Enhanced Backward Compatibility — exploring techniques to ensure smooth upgrades and seamless support for legacy systems in your Kubernetes environments.


Feel free to subscribe to my newsletter and follow me on LinkedIn.