Connecting to Google Kubernetes Engine (GKE) Clusters Locally

Shedrack Akintayo

When working with Google Kubernetes Engine (GKE) clusters, especially in team environments or CI/CD pipelines, you'll often find yourself needing to connect to clusters that were created by automated systems or other team members. This guide walks through the complete process of connecting to a GKE cluster locally using a kubeconfig file, even when you weren't the one who originally created the cluster.

The Scenario

Imagine you're part of a team that uses automated infrastructure provisioning. A cluster gets created automatically (perhaps through a GitHub Actions workflow or Terraform), and you receive the kubeconfig details. Now you need to connect to this cluster from your local machine to deploy applications, debug issues, or run kubectl commands.

This is exactly what happened to me recently when working with an on-demand environment system that automatically provisioned GKE clusters with GPU support for testing machine learning workloads.

Prerequisites

Before we dive in, you'll need:

  • macOS (this guide is Mac-specific, but the concepts apply to other platforms)
  • Homebrew package manager
  • kubectl (likely already installed if you're working with Kubernetes)
  • Access to the kubeconfig file for your target cluster

Step 1: Understanding the Kubeconfig

A kubeconfig file contains all the information kubectl needs to connect to your cluster:

  • Cluster endpoint URL and certificates
  • User authentication details
  • Context information (which cluster/user combination to use)

Here's what a typical GKE kubeconfig structure looks like:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64-encoded-cert>
    server: https://your-cluster-endpoint
  name: gke_project_zone_cluster-name
contexts:
- context:
    cluster: gke_project_zone_cluster-name
    user: gke_project_zone_cluster-name
  name: gke_project_zone_cluster-name
current-context: gke_project_zone_cluster-name
kind: Config
users:
- name: gke_project_zone_cluster-name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      provideClusterInfo: true

The key thing to notice is the gke-gcloud-auth-plugin command in the user configuration. This is what handles authentication with Google Cloud.
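
If you've just been handed a kubeconfig and want to inspect it before pointing kubectl at anything, kubectl can pretty-print it for you; sensitive fields like certificate data are redacted by default. The path below is just an example location:

kubectl config view --kubeconfig=/path/to/cluster-config.yaml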

Step 2: Installing Required Tools

Install Google Cloud SDK

brew install --cask google-cloud-sdk

After installation, reload your shell configuration so the gcloud command is on your PATH (Homebrew's post-install output shows the exact lines to add if it isn't picked up automatically):

source ~/.zshrc  # or ~/.bashrc depending on your shell

Verify the installation:

gcloud version

Install the GKE Auth Plugin

This is the crucial component that enables kubectl to authenticate with GKE clusters:

gcloud components install gke-gcloud-auth-plugin
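
You can confirm the plugin landed on your PATH by asking it for its version:

gke-gcloud-auth-plugin --version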

Step 3: Authentication with Google Cloud

Before you can connect to any GKE cluster, you need to authenticate with Google Cloud:

gcloud auth login

This will open your browser for the OAuth flow. After authentication, set your project:

gcloud config set project your-project-id
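
As a side note, if your Google account also has access to the project that owns the cluster, you don't strictly need to be handed a kubeconfig file: gcloud can generate the kubeconfig entry for you. The cluster name, zone, and project ID below are placeholders:

gcloud container clusters get-credentials your-cluster-name \
  --zone your-zone \
  --project your-project-id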

Step 4: Using Your Kubeconfig

Once you have your kubeconfig file (let's say it's saved as cluster-config.yaml), you have a few options for using it:

Option 1: Set KUBECONFIG Environment Variable

export KUBECONFIG=/path/to/your/cluster-config.yaml
kubectl cluster-info
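
Keep in mind that the exported variable only affects the current shell session. If you'd rather not change your environment at all, you can scope the kubeconfig to a single command:

KUBECONFIG=/path/to/your/cluster-config.yaml kubectl get nodes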

Option 2: Use the --kubeconfig Flag

kubectl --kubeconfig=/path/to/your/cluster-config.yaml get nodes

Option 3: Merge with Your Default Kubeconfig

# Backup your existing config
cp ~/.kube/config ~/.kube/config.backup

# Set both configs in KUBECONFIG
export KUBECONFIG=~/.kube/config:/path/to/your/cluster-config.yaml

# Merge them
kubectl config view --merge --flatten > ~/.kube/config.merged
mv ~/.kube/config.merged ~/.kube/config
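
Once the merge looks good, everything lives in your default config, so you can drop the environment variable and confirm the new context is present:

# The merged file is now the default config
unset KUBECONFIG

# The new cluster's context should appear alongside your existing ones
kubectl config get-contexts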

Step 5: Verifying Your Connection

Let's test the connection with some basic commands:

# Check cluster info
kubectl cluster-info

# List nodes
kubectl get nodes -o wide

# Check running pods across all namespaces
kubectl get pods --all-namespaces

If you see output similar to this, you're successfully connected:

Kubernetes control plane is running at https://your-cluster-endpoint
GLBCDefaultBackend is running at https://your-cluster-endpoint/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
KubeDNS is running at https://your-cluster-endpoint/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
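
Since the cluster in my case was provisioned with GPU node pools, I also like to check that the GPUs show up as node resources. Assuming the standard NVIDIA device plugin and drivers are installed, they appear under the nvidia.com/gpu resource name:

# Look for allocatable GPUs on the nodes
kubectl describe nodes | grep -i "nvidia.com/gpu"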

Common Issues and Troubleshooting

"gke-gcloud-auth-plugin not found"

This means the GKE auth plugin either isn't installed or isn't in your PATH. Make sure you've run:

gcloud components install gke-gcloud-auth-plugin
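
If it's installed but kubectl still can't find it, check that the binary actually resolves on your PATH and that the gcloud component shows up as installed:

# Should print the path to the plugin binary
command -v gke-gcloud-auth-plugin

# Should list the component as installed
gcloud components list | grep gke-gcloud-auth-plugin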

"Unable to connect to the server"

This could be due to:

  1. Network connectivity issues
  2. Incorrect cluster endpoint in kubeconfig
  3. Authentication problems

Verify your authentication:

gcloud auth list
gcloud config get-value project

Certificate Issues

If you're getting certificate validation errors, ensure the certificate-authority-data in your kubeconfig is correct and properly base64 encoded.
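
One way to sanity-check it is to decode the certificate and inspect it with openssl (using cluster-config.yaml as the example filename from earlier); if the decode or parse fails, the data was likely truncated or corrupted when the file was copied around:

# Decode the embedded CA certificate and print its subject and expiry
grep certificate-authority-data cluster-config.yaml \
  | awk '{print $2}' \
  | base64 --decode \
  | openssl x509 -noout -subject -enddate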

Working with Multiple Clusters

When you have multiple clusters, you can switch between them using contexts:

# List available contexts
kubectl config get-contexts

# Switch to a specific context
kubectl config use-context gke_project_zone_cluster-name

# Check current context
kubectl config current-context
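
Those auto-generated GKE context names get unwieldy quickly, so I sometimes rename them to something shorter (the new name below is arbitrary):

kubectl config rename-context gke_project_zone_cluster-name ml-testing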

Security Best Practices

  1. Store kubeconfig files securely: Don't commit them to version control
  2. Use temporary locations: Consider using /tmp for kubeconfig files that you'll only use temporarily
  3. Regular cleanup: Remove old kubeconfig files and unused contexts (see the example after this list)
  4. Principle of least privilege: Ensure your Google Cloud user has only the necessary permissions
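
As a rough sketch of points 2 and 3, this is the kind of cleanup I run once I'm done with a temporary cluster; the context, cluster, and user names are the placeholders from earlier:

# Make the default kubeconfig readable only by you
chmod 600 ~/.kube/config

# Remove the context, cluster, and user entries you no longer need
kubectl config delete-context gke_project_zone_cluster-name
kubectl config delete-cluster gke_project_zone_cluster-name
kubectl config unset users.gke_project_zone_cluster-name

# Delete any temporary kubeconfig files (example path)
rm /tmp/cluster-config.yaml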

Conclusion

Connecting to GKE clusters locally using kubeconfig files is straightforward once you have the right tools installed. The key components are:

  1. Google Cloud SDK with the GKE auth plugin
  2. Proper authentication with Google Cloud
  3. A valid kubeconfig file
  4. Understanding how to manage multiple cluster contexts

This approach is particularly useful in team environments where clusters are provisioned automatically, or when you need to access clusters created by CI/CD systems. The ability to quickly connect to any GKE cluster with just a kubeconfig file makes it easy to troubleshoot, deploy applications, or perform administrative tasks across multiple environments.

Remember to always follow security best practices when handling kubeconfig files, as they contain sensitive authentication information that could provide access to your Kubernetes clusters.