proxy workloads
proxy forwarding allows kftray to route traffic through kubernetes clusters to reach services that aren't directly accessible from the developer's local machine. this becomes useful when working with internal services, external APIs that require cluster network access, or services behind complex network configurations.
when proxy forwarding helps
proxy forwarding comes up when someone needs to access services that the kubernetes cluster can reach but their local machine can't. this happens with internal corporate APIs, database services on private networks, or cloud services that only allow access from specific IP ranges.
instead of setting up VPN connections or complex network routing, proxy forwarding lets the cluster act as a network bridge. local applications connect to kftray, which routes traffic through the cluster to the target service.
this approach works particularly well for development scenarios where local code needs to integrate with services that production applications access from within the cluster.
proxy configuration differences
proxy configurations use workload_type: "proxy" instead of targeting specific kubernetes services. the key difference is the remote_address field, which specifies the target service that the cluster should connect to:
```json
{
  "workload_type": "proxy",
  "namespace": "default",
  "local_port": 8080,
  "remote_port": 80,
  "remote_address": "internal-service.cluster.local",
  "protocol": "tcp",
  "alias": "internal-proxy"
}
```
the cluster network handles DNS resolution and routing to the target address, allowing access to services that local machines can't reach directly.
architectural requirements
proxy forwarding requires the kftray-server component deployed in the target cluster, similar to UDP forwarding. this proxy server receives connections from kftray and establishes outbound connections to the target services.
the architecture looks like this: the local application connects to kftray, which establishes a kubernetes port forward to the proxy server pod, which then connects to the target service specified in remote_address.
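a quick way to sanity-check this chain is to confirm the proxy server pod came up after starting the forward. a minimal sketch, assuming the pod carries an app=kftray-server label (the label may differ depending on how kftray deploys it):

```bash
# look for the proxy server pod in the target namespace
# (the app=kftray-server label is an assumption; adjust to your deployment)
kubectl get pods -n default -l app=kftray-server

# once the forward is active, traffic to the local port should
# reach remote_address through the chain described above
curl -v http://localhost:8080/
```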
this adds a network hop but enables access to services that would otherwise require complex networking setup or VPN configurations.
realistic usage scenarios
accessing internal corporate services
many companies run internal APIs that are only accessible from within corporate networks. when developing locally against these services, proxy forwarding allows cluster-based access:
```json
{
  "workload_type": "proxy",
  "remote_address": "auth-service.internal.company.com",
  "local_port": 8443,
  "remote_port": 443,
  "protocol": "tcp",
  "alias": "corp-auth"
}
```
local development code can connect to localhost:8443 and reach the internal authentication service through the cluster's network connection.
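with the forward active, any local client can reach the service through the tunnel. a minimal sketch using curl; the /login path is hypothetical, and the TLS certificate is issued for the internal hostname rather than localhost, so verification either gets skipped or the hostname gets mapped explicitly:

```bash
# the certificate matches auth-service.internal.company.com, not localhost,
# so skip verification for a quick test (the /login path is hypothetical)
curl -k https://localhost:8443/login

# alternative: keep TLS verification by mapping the internal hostname to localhost
curl --resolve auth-service.internal.company.com:8443:127.0.0.1 \
  https://auth-service.internal.company.com:8443/login
```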
cloud service integration
cloud services often restrict access to specific IP ranges or require connections from within the cloud provider's network. proxy forwarding enables local development against these services:
```json
{
  "workload_type": "proxy",
  "remote_address": "database.region.rds.amazonaws.com",
  "local_port": 5432,
  "remote_port": 5432,
  "protocol": "tcp",
  "alias": "cloud-db"
}
```
this allows local applications to connect to cloud databases that only allow access from cluster IP addresses.
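a local client then connects to the forwarded port as if the database were running locally. a sketch with psql, where the user and database names are placeholders:

```bash
# connect to the cloud database through the proxy forward
# (app_user and app_db are placeholder names)
psql -h localhost -p 5432 -U app_user -d app_db
```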
service mesh integration
complex service mesh environments sometimes make direct service access difficult from outside the cluster. proxy forwarding can route through the mesh:
```json
{
  "workload_type": "proxy",
  "remote_address": "payment-service.payments.svc.cluster.local",
  "local_port": 8080,
  "remote_port": 8080,
  "protocol": "tcp",
  "alias": "mesh-payments"
}
```
the cluster handles service mesh routing, authentication, and policy enforcement that would be complex to replicate in local development.
configuration considerations and limitations
proxy forwarding works well for most HTTP, database, and API services but adds latency due to the extra network hop through the cluster. very high-throughput applications may notice a performance impact.
the remote_address field must be resolvable from within the cluster network. this includes kubernetes service DNS names, external hostnames that the cluster can reach, and IP addresses accessible from cluster nodes.
network policies and security groups must allow the proxy server pod to connect to the target service. some environments restrict outbound connections from pods, which can prevent proxy forwarding from working.
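one quick check is to list the network policies in the proxy server's namespace and look for egress restrictions:

```bash
# list policies that may restrict the proxy pod's outbound connections
kubectl get networkpolicy -n default
kubectl describe networkpolicy -n default
```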
security and access control considerations
proxy forwarding potentially exposes services that were previously inaccessible from developer machines. teams should consider whether this expanded access aligns with security policies and network segmentation requirements.
the proxy server runs with the same network access as other pods in its namespace, so namespace selection can affect what services become reachable. placing proxy servers in restricted namespaces limits potential security exposure.
audit trails become important when proxy forwarding provides access to sensitive services. monitoring proxy usage helps understand what services developers are accessing through the cluster network.
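the proxy server's logs are one place to start. a sketch, again assuming an app=kftray-server label:

```bash
# tail the proxy server logs to see what connections it is handling
# (the label is an assumption; adjust to match your deployment)
kubectl logs -n default -l app=kftray-server --tail=100 -f
```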
troubleshooting proxy connections
when proxy forwarding fails, the problem usually lies in network connectivity between the cluster and target service. test connectivity from within the cluster using a debug pod:
```bash
kubectl run debug --image=busybox -it --rm -- sh
# then test: wget -O- http://target-service:port
```
DNS resolution problems show up as "host not found" errors. verify that the remote_address resolves correctly from within the cluster network.
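a throwaway busybox pod can confirm resolution from inside the cluster, using the remote_address from the earlier example:

```bash
# check that the target address resolves from the cluster network
kubectl run dns-test --image=busybox -it --rm -- \
  nslookup internal-service.cluster.local
```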
firewall or security group restrictions cause "connection refused" or timeout errors. check that the cluster can reach the target service on the specified port.
authentication issues may require cluster-based credentials or certificates that aren't available to the proxy server pod. some services expect requests to come from specific source IP ranges.
performance and resource considerations
proxy forwarding adds network latency compared to direct service access or standard port forwarding. the extra hop through the cluster typically adds 10-50ms depending on cluster network configuration.
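a rough way to measure the overhead is to time requests through the forwarded port and compare against direct access, for example with curl's timing variables:

```bash
# rough round-trip timing through the proxy forward
curl -o /dev/null -s -w 'total: %{time_total}s\n' http://localhost:8080/
```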
each proxy configuration deploys a separate proxy server pod, consuming cluster resources. very large numbers of proxy forwards may impact cluster capacity.
connection limits depend on both the proxy server pod resources and the target service's capacity. high-throughput applications may need resource adjustments for the proxy deployment.
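when throughput matters, watching the proxy pod's actual usage helps decide whether its limits need raising. a sketch that assumes metrics-server is installed and the same app=kftray-server label as above:

```bash
# observe the proxy pod's resource usage under load
# (requires metrics-server; the label is an assumption)
kubectl top pod -n default -l app=kftray-server
```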
when to avoid proxy forwarding
direct service access or VPN connections often provide better performance and simpler architecture than proxy forwarding. teams with existing network connectivity solutions may not benefit from the added complexity.
services that require specific client certificates, source IP validation, or complex authentication may not work well through proxy forwarding. the proxy server acts as an intermediary that may not preserve all client connection characteristics.
very high-security environments may prefer more controlled access methods rather than expanding service accessibility through proxy forwarding.
integration with development workflows
proxy forwarding works well for development scenarios where local code needs to integrate with production-like services. this enables realistic testing without requiring complex local infrastructure setup.
teams often use proxy forwarding during feature development to test against shared services, then deploy to the cluster for integration testing with the same services accessed directly.
the approach scales well for microservices development where local services need to interact with dozens of backend services that are impractical to run locally.