Traffic Management
Istio is a pretty nifty tool when it comes to managing traffic: it can route requests dynamically, balance load across service instances, and recover gracefully from failures. It even has a flair for the dramatic with its fault injection feature.

The duo responsible for these features is none other than Pilot and Envoy. They cater to all traffic going into and out of a container:
As the heartbeat of the system, Pilot manages and configures all the Envoy instances within the service mesh.
Envoy takes a more logistical role, maintaining load balancing and health check information. This enables it to evenly distribute traffic amongst target instances while strictly adhering to assigned routing rules.


Upgrading the application programming interface (API)
In Istio 0.7.x and earlier, only the config.istio.io/v1alpha2 API was available. With the 0.8.0 release, Istio moved up to networking.istio.io/v1alpha3 and took the chance to rename a few traffic management resources:
Say goodbye to RouteRule and shake hands with VirtualService. It governs how service requests within the mesh are routed, taking factors such as host, sourceLabels, and HTTP headers into account, and it has more tricks up its sleeve, including support for percentage-based traffic splits, timeouts, retries, and fault injection.
DestinationPolicy got a facelift into DestinationRule. It defines the policies applied after routing, including circuit breakers, load balancing, and Transport Layer Security (TLS) settings.
EgressRule made way for ServiceEntry, the cosmopolitan feature that acknowledges services beyond the mesh walls. Entries come in two kinds: internal and external. Internal entries behave like in-house services explicitly added to the mesh; external entries represent services outside the mesh, for which mutual TLS authentication is not available and policy has to be enforced on the client side.
Meet Gateway, the new Ingress. It configures load balancing for traffic entering the mesh at its edge.
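As a taste of the new API, here is a minimal sketch of a VirtualService that adds a timeout and retries to a route (the ratings host and the v1 subset are assumptions borrowed from the bookinfo sample; the subset itself would be defined in a matching DestinationRule):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings               # requests addressed to the ratings service
  http:
  - route:
    - destination:
        host: ratings
        subset: v1        # subset defined in a DestinationRule
    timeout: 10s          # fail the request if no response arrives within 10s
    retries:
      attempts: 3         # retry up to three times
      perTryTimeout: 2s   # each attempt gets at most 2s
```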
Service discovery and load balancing
In order to run the show, Istio presumes containers register with it upon launching (either manually or through the injection of an Envoy sidecar into the Pod). Once Envoy receives a request from the outside world, it performs load balancing. There are several balancing algorithms in its arsenal, such as round-robin, random, and least request. Moreover, Envoy regularly checks the health of the back-end containers of each service, automatically removing instances that fail their health checks and restoring them once they recover. A container can also take itself out of load balancing explicitly by returning HTTP 503 to the health check.
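The load-balancing algorithm can be chosen per destination through a DestinationRule; a minimal sketch that switches an (assumed) reviews service to random balancing:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: RANDOM   # other simple options include ROUND_ROBIN and LEAST_CONN
```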

Take charge of your traffic
Istio operates under the assumption that all traffic entering and leaving the service mesh passes through an Envoy proxy. Armed with iptables, the Envoy sidecar redirects traffic coming into the Pod and leaving it to the port the Envoy process listens on (more specifically, port 15001):
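As a rough illustration, the redirect installed by the istio-init container boils down to a NAT rule like the one below (the chain name and the exact rule set vary between Istio versions; this is a sketch, not the full rule chain):

```sh
# Redirect all outbound TCP traffic from the Pod to Envoy's listener on port 15001
iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-port 15001
```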
Bouncing back from failures
In the face of adversity, Istio has a resilient set of recovery tools ready for deployment:
A timeout feature to avoid excessive waiting.
Retry procedures, with options to cap the maximum retry time and vary the retry intervals.
Health checks to automatically sideline any unfit containers.
Request limitations, such as the number of concurrent requests and connections.
Circuit-breaking to stop the spread of problems.
These features are all conveniently adjustable at run time, mostly through VirtualService (circuit breaking lives in DestinationRule). For the user named "jason", you can make a service return HTTP 500 while other users are served normally:
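A minimal sketch of such a rule, modeled on the bookinfo ratings service (the host name and the v1 subset are assumptions taken from that sample):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  # Requests carrying the end-user header "jason" get an injected HTTP 500.
  - match:
    - headers:
        end-user:
          exact: jason
    fault:
      abort:
        percent: 100
        httpStatus: 500
    route:
    - destination:
        host: ratings
        subset: v1
  # Everyone else is routed normally.
  - route:
    - destination:
        host: ratings
        subset: v1
```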
And here's how you can handle circuit breaking:
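Circuit breaking is expressed as connection-pool limits and outlier detection in a DestinationRule; a minimal sketch with illustrative limits:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1            # at most one TCP connection per upstream
      http:
        http1MaxPendingRequests: 1   # at most one queued request
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1           # eject a host after a single 5xx
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
```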
Stirring up trouble
Who says Istio is just about smooth sailing? It can also deliberately induce failure to simulate real-world problems that applications may encounter. These mess-maker features, which can be configured through VirtualService, include:
Injecting delays for testing network latency and service overload.
Forcefully failing to test how the application deals with failure.
You can create a 2-second delay with the following configuration:
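A minimal sketch of such a delay rule, again assuming bookinfo's ratings service as the target:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 100      # delay every matching request
        fixedDelay: 2s    # by a fixed two seconds
    route:
    - destination:
        host: ratings
        subset: v1
```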
Rolling out the red carpet for "Canary deployments"

Start off by deploying bookinfo and configuring the default route to version v1:
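A sketch of the commands, assuming you are working from an Istio release directory that ships the bookinfo sample (file paths differ slightly between Istio releases):

```sh
# Deploy the bookinfo sample with the Envoy sidecar injected
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)

# Define the v1/v2/v3 subsets for each service
kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml

# Route all traffic to version v1 by default
kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
```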
Example 1: Route 10% of the traffic to version v2 and the remaining to version v1.
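A sketch for the reviews service (subset names follow the bookinfo sample):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90      # 90% of traffic stays on v1
    - destination:
        host: reviews
        subset: v2
      weight: 10      # 10% is shifted to v2
```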
Example 2: All requests from user "Jason" go to version v2.
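A sketch matching on the end-user header that the bookinfo login sets (note the sample uses the lowercase name "jason"):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2   # jason's requests go to v2
  - route:
    - destination:
        host: reviews
        subset: v1   # everyone else stays on v1
```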
Example 3: Make the switch and move everything to version v2.
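Moving everything to v2 is a single unconditional route; a sketch:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
```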
Example 4: Put a cap on concurrent access.
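Concurrency caps are set in the DestinationRule's connection pool; a sketch with illustrative limits (the subsets mirror the bookinfo sample):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100            # cap on concurrent TCP connections
      http:
        http1MaxPendingRequests: 10    # cap on queued HTTP requests
        maxRequestsPerConnection: 10
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```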
To see the effect of the concurrency limit, you can use wrk to put the application under load:
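For example (GATEWAY_URL is assumed to hold the ingress gateway's host and port for your installation):

```sh
# 2 threads, 100 concurrent connections, for 30 seconds
wrk -t2 -c100 -d30s http://$GATEWAY_URL/productpage
```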
The Gateway
At deployment, Istio automatically bootstraps an ingress gateway (istio-ingressgateway) to oversee Ingress access; you then bind services to it with Gateway and VirtualService resources.
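A minimal sketch exposing bookinfo's productpage through that gateway (host, URI, and port values follow the bookinfo sample):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway        # bind this routing to the gateway above
  http:
  - match:
    - uri:
        exact: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080
```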
To use TLS:
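A sketch of a TLS-enabled server on the gateway, assuming the certificate and key have already been mounted into the ingress gateway Pod at the paths shown (the istio-ingressgateway-certs convention; adjust to your setup):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
```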
Egress traffic
By default, Istio takes charge of both incoming and outgoing container traffic, meaning containers cannot reach services outside the Kubernetes cluster. However, with the help of a ServiceEntry, selected external services can be made reachable.
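A minimal sketch of a ServiceEntry that lets Pods in the mesh call an external HTTP API (httpbin.org is purely illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL   # the service lives outside the mesh
```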
Note that ServiceEntry only supports HTTP, TCP, and HTTPS. For other protocols, you have to bypass the sidecar entirely by limiting interception to specific IP ranges with --includeIPRanges:
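One way to do this (depending on the Istio version, --includeIPRanges is a flag of istioctl kube-inject or a mesh-wide install option; the CIDR below is an assumption and should be your cluster's internal service IP range):

```sh
# Only intercept traffic destined for the cluster-internal range;
# traffic to any other address bypasses the Envoy sidecar entirely.
istioctl kube-inject -f your-app.yaml --includeIPRanges=10.0.0.1/24 | kubectl apply -f -
```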
Looking in the mirror
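Istio can also mirror (shadow) live traffic to another version of a service, so that version can be exercised with real requests while the mirrored responses are discarded. A minimal sketch, assuming a hypothetical httpbin service with v1 and v2 subsets defined in a DestinationRule:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:              # send a copy of every request to v2
      host: httpbin
      subset: v2
```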