Hello. In this post, I'll show you how to set up a local Kubernetes cluster using Kind, Terraform, and Kftray. We'll keep all services inside the cluster, so there's no need for ingress controllers or for exposing services via NodePort or LoadBalancer. I'll also walk you through the Terraform code used in this setup.
Terraform code used in this blog post: https://github.com/hcavarsan/kftray-k8s-tf-example
Why Keep Services Inside the Cluster?
Exposing services externally can add complexity and potential security risks. For local development or secure environments, it's often better to keep everything internal. By using `kubectl port-forward` and automating it with Kftray, we can access services running inside the cluster without exposing them to the outside world.
Tools You'll Need
Before starting, make sure you have the following installed:
- Docker
- Terraform (v1.9.5)
- Kftray (you can choose between the GUI or TUI version)
Cloning the Repository
First, clone the repository that contains the Terraform code:
git clone https://github.com/hcavarsan/kftray-k8s-tf-example
cd kftray-k8s-tf-example/terraform
Understanding the Terraform Code
The Terraform code in this repository automates the following:
- Creates a Kind Kubernetes cluster.
- Deploys Helm charts for Argo CD, Prometheus, Alertmanager, Grafana, and Jaeger.
- Sets up service annotations for Kftray to auto-import port-forward configurations.
Project Structure
Here's how the project is organized:
kftray-k8s-tf-example
├── terraform
│   ├── helm.tf
│   ├── outputs.tf
│   ├── locals.tf
│   ├── providers.tf
│   ├── variables.tf
│   ├── templates
│   │   ├── argocd-values.yaml.tpl
│   │   ├── grafana-values.yaml.tpl
│   │   ├── jaeger-values.yaml.tpl
│   │   └── prometheus-values.yaml.tpl
│   └── kind.tf
├── Makefile
├── docs
│   ├── kftray.gif
│   └── kftui.gif
└── README.md
providers.tf
This file specifies the Terraform providers we'll use:
terraform {
  required_version = ">= 1.0.0"

  required_providers {
    kind = {
      source  = "tehcyx/kind"
      version = "0.4.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.0.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.0"
    }
    template = {
      source  = "hashicorp/template"
      version = ">= 2.0.0"
    }
  }
}

provider "kind" {
}

provider "kubernetes" {
  config_path = kind_cluster.default.kubeconfig_path
}

provider "helm" {
  kubernetes {
    config_path = kind_cluster.default.kubeconfig_path
  }
}
- kind: Manages Kind clusters.
- kubernetes: Interacts with the Kubernetes cluster.
- helm: Deploys Helm charts.
- template: Processes template files.
variables.tf
Defines variables used in the Terraform configuration:
variable "cluster_name" {
description = "Name of the Kind cluster"
type = string
default = "kftray-cluster"
}
variable "kubernetes_version" {
description = "Version of the Kind node image"
type = string
default = "v1.30.4"
}
variable "kubeconfig_dir" {
description = "Directory to store the kubeconfig file"
type = string
default = "~/.kube"
}
# Chart versions
variable "argocd_chart_version" {
description = "Version of the Argo CD Helm chart"
type = string
default = "5.19.12"
}
variable "prometheus_chart_version" {
description = "The version of the Prometheus chart to deploy."
type = string
default = "25.27.0"
}
variable "grafana_chart_version" {
description = "The version of the Grafana chart to deploy."
type = string
default = "8.5.0"
}
variable "jaeger_chart_version" {
description = "The version of the Jaeger chart to deploy."
type = string
default = "3.3.1"
}
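All of these variables have defaults, so you can apply the configuration as-is. If you want a different cluster name or other chart versions, one option (not part of the repository, just a sketch) is to drop a terraform.tfvars file next to the configuration:

# terraform.tfvars -- hypothetical override file; values are only examples
# Note: the kubeconfig file name follows the cluster name, so the paths
# mentioned later in this post would change accordingly.
cluster_name          = "kftray-dev"
kubernetes_version    = "v1.30.4"
grafana_chart_version = "8.5.0"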
kind.tf
Creates the Kind cluster:
resource "kind_cluster" "default" {
name = var.cluster_name
node_image = "kindest/node:${var.kubernetes_version}"
kubeconfig_path = pathexpand("${var.kubeconfig_dir}/kind-config-${var.cluster_name}")
wait_for_ready = true
kind_config {
kind = "Cluster"
api_version = "kind.x-k8s.io/v1alpha4"
node {
role = "control-plane"
extra_port_mappings {
container_port = 80
host_port = 80
protocol = "TCP"
}
}
node {
role = "worker"
}
}
}
- name: The cluster name.
- node_image: The Kind node image, pinned to the Kubernetes version.
- kubeconfig_path: Where to store the generated kubeconfig file.
- wait_for_ready: Wait until the cluster is ready before continuing.
- kind_config: Custom Kind configuration (here, one control-plane and one worker node).
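The cluster definition is easy to extend. As a rough sketch (not in the repository), a hypothetical worker_count variable plus a dynamic block could add more worker nodes:

# Hypothetical extension: scale worker nodes with a variable
variable "worker_count" {
  type    = number
  default = 2
}

# Inside the kind_config block of kind.tf, the single worker node
# could then be replaced with a dynamic block:
dynamic "node" {
  for_each = range(var.worker_count)
  content {
    role = "worker"
  }
}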
locals.tf
Defines local variables and service configurations:
locals {
  services = {
    argocd = {
      namespace  = "argocd"
      repository = "https://argoproj.github.io/argo-helm"
      chart      = "argo-cd"
      version    = var.argocd_chart_version
      kftray = {
        server = {
          alias       = "argocd"
          local_port  = "16080"
          target_port = "http"
        }
      }
    }
    # ... other services ...
  }

  services_values = {
    for service_name, service in local.services :
    service_name => templatefile("${path.module}/templates/${service_name}-values.yaml.tpl", {
      kftray = service.kftray
    })
  }
}
- services: A map of services to deploy.
- kftray: Port-forward configurations for Kftray.
- services_values: Processes Helm values templates for each service.
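The remaining services (Prometheus, Grafana, Jaeger) follow the same shape in the repository. Purely to illustrate the pattern, a Grafana entry could look roughly like this (the namespace, map key, and port name here are assumptions, not the repo's exact values):

# Illustrative entry only -- check locals.tf in the repo for the real one
grafana = {
  namespace  = "monitoring"                            # assumed namespace
  repository = "https://grafana.github.io/helm-charts"
  chart      = "grafana"
  version    = var.grafana_chart_version
  kftray = {
    grafana = {                                        # key names the Service to annotate (assumed)
      alias       = "grafana"
      local_port  = "13080"
      target_port = "http"                             # named port; a port number also works
    }
  }
}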
helm.tf
Deploys the services using Helm:
resource "helm_release" "services" {
depends_on = [kind_cluster.default]
for_each = local.services
name = each.key
namespace = each.value.namespace
create_namespace = true
repository = each.value.repository
chart = each.value.chart
version = each.value.version
values = [
local.services_values[each.key]
]
}
- for_each: Iterates over each service.
- name: Release name.
- namespace: Kubernetes namespace.
- repository: Helm chart repository.
- chart: Helm chart name.
- version: Chart version.
- values: Custom values for the Helm chart.
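Since values is a list of YAML documents and later entries override earlier ones on conflicting keys, you could also layer quick per-release tweaks on top of the templated values. The repository doesn't do this; it's just a sketch of the pattern:

values = [
  local.services_values[each.key],
  # hypothetical extra layer -- later entries win on conflicting keys
  yamlencode({
    fullnameOverride = each.key # assumes the chart supports fullnameOverride
  })
]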
templates/
Contains Helm values templates for each service (e.g., `argocd-values.yaml.tpl`). These templates inject the Kftray annotations into the service definitions.
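I won't paste the full templates here, but the part that matters is the Service annotations that Kftray's auto-import scans for. As a rough sketch (the exact values layout depends on each chart, and the annotation value follows kftray's alias-localport-targetport convention), the Argo CD template renders something equivalent to:

# Rough HCL equivalent of what argocd-values.yaml.tpl produces
yamlencode({
  server = {
    service = {
      annotations = {
        "kftray.app/enabled" = "true"
        "kftray.app/configs" = "argocd-16080-http" # alias-local_port-target_port
      }
    }
  }
})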
outputs.tf
Defines outputs for the Terraform run:
output "endpoint" {
description = "API endpoint for the Kind cluster."
value = kind_cluster.default.endpoint
}
output "kubeconfig" {
description = "Kubeconfig file for the Kind cluster."
value = kind_cluster.default.kubeconfig
sensitive = true
}
output "credentials" {
description = "Credentials for authenticating with the Kind cluster."
value = {
client_certificate = kind_cluster.default.client_certificate
client_key = kind_cluster.default.client_key
cluster_ca_certificate = kind_cluster.default.cluster_ca_certificate
}
sensitive = true
}
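These outputs mirror the attributes the providers already consume. If you'd rather not rely on a kubeconfig file on disk, the same attributes could configure the kubernetes provider directly (replacing the config_path version in providers.tf); a minimal sketch:

# Alternative provider wiring (sketch) -- authenticate with the cluster
# credentials instead of a kubeconfig path
provider "kubernetes" {
  host                   = kind_cluster.default.endpoint
  client_certificate     = kind_cluster.default.client_certificate
  client_key             = kind_cluster.default.client_key
  cluster_ca_certificate = kind_cluster.default.cluster_ca_certificate
}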
Applying the Terraform Configuration
To apply the Terraform configuration and set up the cluster, run:
make apply
This will:
- Initialize Terraform.
- Create the Kind cluster.
- Deploy the Helm charts.
- Set up the service annotations.
Installing Kftray
Go to the Kftray GitHub page and follow the installation instructions for your operating system.
Importing Port-Forward Configurations into Kftray
Using Kftray GUI
- Open Kftray and click the tray icon to open the main window.
- Click the menu icon at the bottom left corner.
- Select "Auto Import."
- Click "Set kubeconfig" and choose the kubeconfig file created by Terraform (usually
~/.kube/kind-config-kftray-cluster
). - Select the context
kftray-cluster
from the dropdown menu. - Click "Import" to load the port-forward settings.
![Kftray Banner](https://raw.githubusercontent.com/hcavarsan/kftray-k8s-tf-example/refs/heads/main/docs/kftray.gif align="left")
After importing, you can start port forwarding by toggling the switch next to each service or by clicking "Start All."
Using Kftui (Terminal Interface)
- Set the `KUBECONFIG` environment variable: `export KUBECONFIG="$HOME/.kube/kind-config-kftray-cluster"`
- Start Kftui: `kftui`
- Press `Tab` to access the top menu and select "Auto Import."
- Press `Ctrl+A` to select all configurations.
- Press `F` to start all port forwards.
![kftui](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pj66296vgfaalg960mkk.gif align="left")
Accessing Your Services Locally
With port forwarding set up, you can access your services on your local machine:
- Argo CD: http://localhost:16080
- Prometheus: http://localhost:19090
- Alertmanager: http://localhost:19093
- Grafana: http://localhost:13080
- Jaeger: http://localhost:15090
Custom Kftray Settings
Adjusting Kftray Port Forwarding Settings
To customize how Kftray forwards ports, edit the `locals.tf` file:
locals {
  services = {
    argocd = {
      kftray = {
        server = {
          alias       = "argocd"
          local_port  = "16080"
          target_port = "http"
        }
      }
    }
    # Other services...
  }
}
- alias: The name displayed in Kftray.
- local_port: The port on your machine to access the service.
- target_port: The service's port or port name inside the cluster.
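For example, to forward Argo CD on a different local port and target the numeric service port instead of the named one, only its kftray block changes (values here are illustrative), and then you re-apply with make apply:

# Illustrative tweak -- forward Argo CD on localhost:18080 and target port 80
kftray = {
  server = {
    alias       = "argocd"
    local_port  = "18080"
    target_port = "80" # numeric port instead of the named "http" port
  }
}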
Cleaning Up
If you want to destroy the cluster and remove all resources, run:
make destroy
This will tear down the cluster and delete all the resources created by Terraform.
Conclusion
By keeping all services inside the cluster and using Kftray for port forwarding, we create a simpler and more secure environment. This setup is useful for local development and situations where you want to avoid exposing services externally.
Feel free to explore and modify the Terraform code to suit your needs. If you have any questions or run into problems, just reach out.
Thanks for reading. You can find more of my work or get in touch on GitHub.