Linkerd is a service mesh for cloud-native applications and a CNCF (Cloud Native Computing Foundation) project. It provides a uniform management and control plane for inter-service communication, decoupling application logic from the communication mechanism, so you gain visibility into and control over service communication without modifying your application. Linkerd instances are stateless and can be deployed in two ways: one instance per application (sidecar) or one instance per node.
Linkerd's main features include:
Service Discovery
Dynamic request routing
HTTP proxy integration, with support for HTTP, TLS, gRPC, HTTP/2, and more
Latency-aware load balancing that supports numerous algorithms, such as Power of Two Choices (P2C) least loaded, P2C peak EWMA, aperture least loaded, heap least loaded, round robin, and more
A circuit breaker that automatically removes unhealthy backend instances, with two strategies: fail-fast (an instance is removed when a connection to it fails) and failure accrual (an instance is marked as failed after five consecutive request failures, with a back-off period before it is retried)
Distributed tracing and metrics
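To make the load-balancing idea concrete, here is a toy sketch of P2C least loaded: pick two backends uniformly at random and route the request to the one with fewer in-flight requests. This is only an illustration of the algorithm, not Linkerd's actual implementation; the load values are made up.

```shell
# Toy illustration of P2C (Power of Two Choices) least loaded.
# loads[i] = number of in-flight requests on backend i (made-up values).
loads=(3 7 2 9)
n=${#loads[@]}

a=$(( RANDOM % n ))   # first random candidate
b=$(( RANDOM % n ))   # second random candidate

# Send the request to whichever candidate is less loaded.
if (( loads[a] <= loads[b] )); then pick=$a; else pick=$b; fi
echo "picked backend $pick (load ${loads[pick]})"
```

Sampling two candidates instead of scanning all backends keeps the choice cheap while still strongly biasing traffic away from overloaded instances.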
How Linkerd Works (Under the Hood)
Linkerd breaks request handling down into four steps:
(1) IDENTIFICATION: assign a logical name (i.e., the target service of the request) to the incoming request; for example, by default the HTTP request 'GET http://example/hello' is assigned the name '/svc/example'.
(2) BINDING: dtabs (delegation tables) bind the logical name to a client name; client names always begin with '/#' or '/$'. For example:
# Assuming dtab is
/env => /#/io.l5d.serversets/discovery
/svc => /env/prod
# Then the service name /svc/users will be bound as
/svc/users
/env/prod/users
/#/io.l5d.serversets/discovery/prod/users
(3) RESOLUTION: a namer resolves the client name, ultimately yielding the actual service address (IP + port).
(4) LOAD BALANCING: Linkerd decides where to send the request according to the configured load-balancing algorithm.
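The identification and binding steps above can be sketched at the string level in shell: derive the logical name from the request URL, then apply the two dtab rewrite rules from the example. This only illustrates the prefix rewriting Linkerd performs internally; it is not how Linkerd is implemented.

```shell
# (1) IDENTIFICATION: derive /svc/<host> from the request URL.
url="http://example/hello"
host="${url#*://}"     # strip the scheme -> example/hello
host="${host%%/*}"     # keep the host    -> example
logical="/svc/$host"   # -> /svc/example

# (2) BINDING: apply the dtab rules from the example above to /svc/users.
name="/svc/users"
name="/env/prod${name#/svc}"                        # /svc => /env/prod
name="/#/io.l5d.serversets/discovery${name#/env}"   # /env => /#/io.l5d.serversets/discovery
echo "$name"
```

Each rewrite replaces a matching prefix, which is why the example in the dtab listing shows the name passing through /env/prod/users before reaching the final client name.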
Deploying Linkerd
Linkerd is deployed as a DaemonSet, one instance on every node:
# For CNI, deploy linkerd-cni.yml instead.
# kubectl apply -f https://github.com/linkerd/linkerd-examples/raw/master/k8s-daemonset/k8s/linkerd-cni.yml
kubectl create ns linkerd
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/servicemesh.yml
$ kubectl -n linkerd get pod
NAME READY STATUS RESTARTS AGE
l5d-6v67t 2/2 Running 0 2m
l5d-rn6v4 2/2 Running 0 2m
$ kubectl -n linkerd get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
l5d LoadBalancer 10.0.71.9 <pending> 4140:32728/TCP,4141:31804/TCP,4240:31418/TCP,4241:30611/TCP,4340:31768/TCP,4341:30845/TCP,80:31144/TCP,8080:31115/TCP 3m
By default, Linkerd's dashboard listens on port 9990 of each container instance (note that this port is not exposed by the l5d service), so it can be reached by port-forwarding to a pod:
kubectl -n linkerd port-forward $(kubectl -n linkerd get pod -l app=l5d -o jsonpath='{.items[0].metadata.name}') 9990 &
echo "open http://localhost:9990 in browser"
Monitoring Tools: Prometheus and Grafana (linkerd-viz)
$ kubectl -n linkerd apply -f https://github.com/linkerd/linkerd-viz/raw/master/k8s/linkerd-viz.yml
$ kubectl -n linkerd get svc linkerd-viz
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
linkerd-viz LoadBalancer 10.0.235.21 <pending> 80:30895/TCP,9191:31145/TCP 24s