Linkerd is a service mesh for cloud-native applications and a CNCF (Cloud Native Computing Foundation) project. It provides a uniform management and control plane for inter-service communication, decoupling application logic from the communication mechanism, so you gain visibility into and control over service communication without changing your application. Linkerd instances are stateless and can be deployed quickly in two ways: one instance per application (sidecar) or one instance per node.
Key features that Linkerd brings to the table include:
Service Discovery
Dynamic request routing
Proxying with support for protocols such as HTTP, HTTP/2, gRPC, and TLS
Latency-aware load balancing with numerous algorithms, such as Power of Two Choices (P2C) least loaded, P2C peak EWMA, aperture least loaded, heap least loaded, round robin, and more
A robust circuit breaker mechanism that automatically removes unhealthy backend instances, including fail-fast (instance removal upon connection failure) and failure accrual (an instance is marked as failed and given a recovery window after five consecutive request failures)
Distributed tracing and metrics
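As a hedged illustration of how the balancer and circuit breaker are chosen, the sketch below shows a linkerd 1.x router configuration selecting the peak-EWMA balancer and a consecutive-failures failure-accrual policy. The field names follow the linkerd 1.x configuration format, but treat the exact values here as assumptions rather than a definitive configuration.

```yaml
routers:
- protocol: http
  servers:
  - port: 4140
    ip: 0.0.0.0
  client:
    loadBalancer:
      kind: ewma                        # latency-aware peak-EWMA balancing
    failureAccrual:
      kind: io.l5d.consecutiveFailures  # circuit breaker policy
      failures: 5                       # mark a backend failed after 5 consecutive failures
      backoff:
        kind: constant
        ms: 10000                       # recovery window before retrying the backend
```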
Under the Hood of Linkerd
Linkerd breaks request handling down into four steps:
(1) IDENTIFICATION: Assigning a logical name (i.e., the target service of the request) to the incoming request; for example, by default the HTTP request 'GET http://example/hello' is assigned the name '/svc/example'.
(2) BINDING: dtabs bind the logical name to a client name, where client names always begin with '/#' or '/$', like:
# Assuming the dtab is
/env => /#/io.l5d.serversets/discovery
/svc => /env/prod

# Then the service name /svc/users will be bound as
/svc/users
/env/prod/users
/#/io.l5d.serversets/discovery/prod/users
(3) RESOLUTION: A namer resolves the client name, eventually yielding the actual service address (IP + port).
(4) LOAD BALANCING: Deciding where to send the request according to the load-balancing algorithm.
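The four steps above can be sketched as a small simulation. This is not Linkerd's actual code; the dtab rules mirror the example above, while the registry contents and random instance choice are illustrative assumptions (random choice stands in for P2C/EWMA balancing).

```python
import random

# Step 1: IDENTIFICATION - map a request to a logical name.
def identify(http_request):
    host = http_request["host"]          # e.g. "users"
    return f"/svc/{host}"                # logical name "/svc/users"

# Step 2: BINDING - rewrite the logical name via dtab rules until it
# starts with "/#" or "/$" (a client name).
DTAB = [
    ("/env", "/#/io.l5d.serversets/discovery"),
    ("/svc", "/env/prod"),
]

def bind(name):
    changed = True
    while changed and not (name.startswith("/#") or name.startswith("/$")):
        changed = False
        for prefix, replacement in DTAB:
            if name.startswith(prefix):
                name = replacement + name[len(prefix):]
                changed = True
                break
    return name

# Step 3: RESOLUTION - a namer turns the client name into concrete
# addresses (hypothetical registry contents).
REGISTRY = {
    "/#/io.l5d.serversets/discovery/prod/users":
        [("10.0.0.1", 8080), ("10.0.0.2", 8080)],
}

def resolve(client_name):
    return REGISTRY[client_name]

# Step 4: LOAD BALANCING - pick one instance.
def balance(addresses):
    return random.choice(addresses)

client_name = bind(identify({"host": "users"}))
print(client_name)                       # /#/io.l5d.serversets/discovery/prod/users
print(balance(resolve(client_name)))     # one of the two registered addresses
```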
Deploying Linkerd
Linkerd is deployed as a DaemonSet, with one instance on every node.
By default, Linkerd's dashboard listens on port 9990 of each container instance (note that it is not exposed in the l5d Service) and can be accessed through the corresponding port:
kubectl -n linkerd port-forward $(kubectl -n linkerd get pod -l app=l5d -o jsonpath='{.items[0].metadata.name}') 9990 &
echo "open http://localhost:9990 in browser"
# Deploy zipkin.
kubectl -n linkerd apply -f https://github.com/linkerd/linkerd-examples/raw/master/k8s-daemonset/k8s/zipkin.yml

# Deploy linkerd for zipkin.
kubectl -n linkerd apply -f https://github.com/linkerd/linkerd-examples/raw/master/k8s-daemonset/k8s/linkerd-zipkin.yml

# Get zipkin endpoint.
ZIPKIN_LB=$(kubectl get svc zipkin -o jsonpath="{.status.loadBalancer.ingress[0].*}")
echo "open http://$ZIPKIN_LB in browser"
Linkerd can also be used as a Kubernetes Ingress Controller. Be aware that the following steps deploy Linkerd to the l5d-system namespace.
$ kubectl create ns l5d-system
$ kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd-ingress-controller.yml -n l5d-system
# If load balancer is supported in the Kubernetes cluster
$ L5D_SVC_IP=$(kubectl get svc l5d -n l5d-system -o jsonpath="{.status.loadBalancer.ingress[0].*}")
$ echo open http://$L5D_SVC_IP:9990

# Or else
$ HOST_IP=$(kubectl get po -l app=l5d -n l5d-system -o jsonpath="{.items[0].status.hostIP}")
$ echo open http://$HOST_IP:$(kubectl get svc l5d -n l5d-system -o 'jsonpath={.spec.ports[1].nodePort}')
Then, by adding the kubernetes.io/ingress.class: "linkerd" annotation, you can route an Ingress through the linkerd ingress controller:
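A minimal Ingress sketch using that annotation might look like the following; the resource name, host, and backend Service are hypothetical placeholders.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world                       # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "linkerd"  # hand this Ingress to linkerd
spec:
  rules:
  - host: hello.example.com               # hypothetical host
    http:
      paths:
      - backend:
          serviceName: world-v1           # hypothetical backend Service
          servicePort: http
```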
You can use Linkerd in two ways: HTTP proxy and linkerd-inject.
HTTP Proxy
When using Linkerd this way, set the application's HTTP proxy to:
HTTP uses $(NODE_NAME):4140
HTTP/2 uses $(NODE_NAME):4240
gRPC uses $(NODE_NAME):4340
In Kubernetes, to get NODE_NAME, you can use the Downward API. For instance,
[Code snippet Omitted for Length - See original reference.]
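The omitted snippet likely resembles the following sketch, which uses the Downward API's spec.nodeName field to populate NODE_NAME and then points the http_proxy environment variable at the node-local Linkerd instance; the container name and image are illustrative assumptions.

```yaml
spec:
  containers:
  - name: hello              # hypothetical container
    image: example/hello     # hypothetical image
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:            # Downward API: inject the node's name
          fieldPath: spec.nodeName
    - name: http_proxy
      value: $(NODE_NAME):4140   # route HTTP traffic via node-local Linkerd
```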
linkerd-inject
# install linkerd-inject
$ go get github.com/linkerd/linkerd-inject

# inject init container and deploy this config
$ kubectl apply -f <(linkerd-inject -f <your k8s config>.yml -linkerdPort 4140)