Appendix: Methods for external access to services running in a k8s cluster
Popular ways to access services running in a Kubernetes cluster from outside.
Port forwarding: for debugging
NodePort: for development purposes
LoadBalancer: for production environments
Ingress: provides advanced HTTP/S routing
Traefik: Ingress controller with additional features for dynamic environments
Explanation
Port Forwarding:
Description: Port forwarding is a simple way to access a service running in a Kubernetes cluster by forwarding a local port on your machine to a port on a specific pod within the cluster.
How it works: You use the kubectl port-forward command to establish a connection between your local machine and a specific pod. For example:
kubectl port-forward <pod-name> <local-port>:<pod-port>
Use Cases: Useful for debugging and accessing individual pods.
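For instance, assuming a pod named server-pod that listens on port 8080 (hypothetical names, for illustration only), the following forwards local port 8080 to the pod and keeps running until interrupted:
kubectl port-forward server-pod 8080:8080
While that command is running, the pod can be reached from another terminal, e.g. with curl http://localhost:8080/.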
NodePort:
Description: NodePort is a service type in Kubernetes that exposes a service on a static port (NodePort) on each node in the cluster. This allows external access to the service using the node's IP address and the assigned NodePort.
How it works: The service is accessible externally at <NodeIP>:<NodePort>. The NodePort is usually in the range 30000-32767.
Use Cases: External access to services during development, testing, or in scenarios where LoadBalancer services are not available.
Example YAML:
apiVersion: v1
kind: Service
metadata:
  name: service-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    service: server
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30001
apiVersion: Specifies the version of the Kubernetes API to use. In this case it is the core v1 API.
kind: Specifies the type of Kubernetes resource being created, which is a Service in this case.
metadata: Contains metadata about the Service, including its name and namespace.
spec: Describes the desired state of the Service.
type: NodePort: Specifies the type of the Service as NodePort, meaning the Service will be accessible on a static port (the NodePort) on each node in the cluster.
selector: Defines the set of Pods that this Service will load balance traffic to. In this case, Pods with the label service: server will be selected.
ports: Specifies the ports that the Service will listen on.
port: 80: The port the Service itself listens on (its cluster-internal port). Traffic reaching the Service on this port is forwarded to the Pods.
targetPort: 8080: Specifies the target port on the selected Pods. Traffic received by the Service is forwarded to the Pods on their port 8080.
nodePort: 30001: The NodePort assigned to the Service. The Service will be accessible externally on each node's IP address at port 30001.
So, in summary, this Service definition named "service-entrypoint" is configured as a NodePort service, targeting Pods with the label service: server
and forwarding external traffic from port 30001 on each node to the Pods on port 8080. External access to the service can be achieved using any of the cluster's node IP addresses at port 30001.
PLEASE NOTE for Minikube: it may additionally be necessary to run the following command.
minikube service service-entrypoint --url
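To try the NodePort service out, one approach is roughly the following (assuming the manifest above is saved as nodeport-service.yaml; node addresses will differ per cluster):
kubectl apply -f nodeport-service.yaml
kubectl get nodes -o wide
curl http://<node-ip>:30001/
The second command lists the nodes together with their addresses; replace <node-ip> with one of them.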
LoadBalancer:
Description: LoadBalancer is a service type in Kubernetes that provisions an external load balancer in cloud environments. The load balancer routes external traffic to the Kubernetes service.
How it works: The external load balancer has a public IP and routes traffic to the cluster's nodes. The service is accessible externally through the load balancer's IP and port.
Use Cases: Ideal for production environments where external traffic needs to be evenly distributed across multiple nodes.
Example YAML:
apiVersion: v1
kind: Service
metadata:
  name: service-entrypoint
  namespace: default
spec:
  type: LoadBalancer
  selector:
    service: server
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8080
      nodePort: 30001
spec: Describes the desired state of the Service.
type: LoadBalancer: Specifies the type of the Service as LoadBalancer, meaning the Kubernetes cluster should provision an external (cloud-specific) load balancer to route external traffic to the Service.
selector: Defines the set of Pods that this Service will load balance traffic to. In this case, Pods with the label service: server will be selected.
ports: Specifies the ports that the Service will listen on.
protocol: "TCP": Specifies the protocol to use. In this case, it's TCP.
port: 80: The port that the Service (and the external load balancer) listens on. External traffic is directed to this port.
targetPort: 8080: Specifies the target port on the selected Pods. Traffic received on port 80 is forwarded to the Pods on their port 8080.
nodePort: 30001: The NodePort assigned to the Service. Even though the service type is LoadBalancer, a NodePort is still allocated; cloud load balancers typically forward traffic to this port on the nodes, and it can also be used directly as <NodeIP>:30001 as an alternative access method.
So, in summary, this Service definition named "service-entrypoint" is configured as a LoadBalancer service, targeting Pods with the label service: server
and forwarding external traffic from port 80 to the Pods on port 8080. The actual external load balancer would typically have its own IP address, and the service would be accessible externally using that IP and port 80. Additionally, the NodePort 30001 can be used for external access as an alternative method if needed.
Please note that the following command is needed for Minikube
minikube tunnel
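Once an external address has been assigned (by the cloud provider, or by minikube tunnel), it appears in the EXTERNAL-IP column of the Service; for example:
kubectl get service service-entrypoint -n default
curl http://<external-ip>/
Replace <external-ip> with the address shown in the EXTERNAL-IP column.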
Ingress:
Description: Ingress is an API object that provides HTTP and HTTPS routing to services based on rules. It allows you to define how external traffic should be directed to your services.
How it works: Ingress controllers (like Nginx, Traefik, etc.) implement the rules defined in the Ingress resource. External traffic is routed to specific services based on the defined rules.
Use Cases: Routing and managing external HTTP/S traffic to multiple services within the cluster. Provides more advanced routing and features compared to NodePort or LoadBalancer.
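For illustration, a minimal Ingress that routes all HTTP traffic for one host to the service-entrypoint Service above might look like the following sketch (it assumes an Ingress controller is installed and registered under the class name nginx; the hostname example.local is a placeholder):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-entrypoint-ingress
  namespace: default
spec:
  ingressClassName: nginx        # assumed controller class name
  rules:
    - host: example.local        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-entrypoint
                port:
                  number: 80
With this in place, the Ingress controller routes requests whose Host header is example.local to the service-entrypoint Service on port 80; additional rules can send other hosts or paths to other Services.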
Traefik:
Description: Traefik is an open-source, cloud-native edge router that can be used as an Ingress controller in Kubernetes. It is designed for dynamic environments and integrates seamlessly with container orchestration systems.
How it works: Traefik monitors the Kubernetes API for changes in Ingress resources and automatically updates its configuration. It provides features like automatic SSL certificate management, load balancing, and more.
Use Cases: Acts as an advanced Ingress controller with features like automatic SSL, dynamic routing, and integration with popular container orchestrators.
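As a sketch, the same routing could be expressed with Traefik's IngressRoute custom resource instead of a standard Ingress (this assumes Traefik v2+ with its CRDs installed; on newer Traefik versions the apiVersion is traefik.io/v1alpha1, and example.local remains a placeholder hostname):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: service-entrypoint-route
  namespace: default
spec:
  entryPoints:
    - web                        # Traefik's default HTTP entry point
  routes:
    - match: Host(`example.local`)
      kind: Rule
      services:
        - name: service-entrypoint
          port: 80
Because Traefik watches the Kubernetes API, changes to this resource are picked up automatically without restarting the router.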