At the end of 2019, Istio announced its fourth consecutive quarterly release for the year, Istio 1.4. The release focuses on improving user experience and making it simpler for operators to manage their clusters. Added features and improvements include the new Istio operator, the v1beta1 authorization policy, automatic mutual Transport Layer Security (TLS) support, and updates to istioctl.
The following sections describe the highlights and give you opportunities to walk through some examples. To learn the details about Istio 1.4, see the community release notes and the Istio documentation. As of today, the 1.4 release has three patch releases – 1.4.1, 1.4.2, and 1.4.3. These patches include bug fixes, improvements, and security updates. Also, check out Dan Berg's 6-minute presentation video from ServiceMeshCon, Dramatic UX Improvement and Analytics-driven Canary Deployment with Istio, which gives a quick recap of the Istio 1.4 release.
Istio operator
One of the main highlights of Istio 1.4 is the addition of the Istio operator to improve the deployment and management of the Istio control plane. The intention is to replace Helm as the primary tool to install and upgrade Istio. Instead of managing multiple YAML files, you can now install Istio using istioctl. The Istio operator uses a custom resource definition (CRD), IstioControlPlane, to define a custom resource that you can use to customize the Istio control plane installation. The istioctl installation command uses this same custom resource to configure the installation.
The Istio operator uses the Istio controller (alpha release) to monitor the IstioControlPlane custom resource. Any updates made to this custom resource are immediately applied to the Istio cluster.
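For illustration, a minimal IstioControlPlane resource that selects the demo configuration profile might look like the following sketch (the file name demo-profile.yaml is illustrative; install.istio.io/v1alpha2 is the API version shipped with the 1.4 operator):

# demo-profile.yaml (illustrative file name)
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  # Start from the built-in demo profile; add overrides under spec to customize the installation
  profile: demo

You could then install with istioctl manifest apply -f demo-profile.yaml, or pass individual settings on the command line with the --set flag, as shown next.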
The recommended way to install Istio is by using the following istioctl command. Specify a configuration profile with the --set flag; the following example uses the demo profile:
istioctl manifest apply --set profile=demo
Automatic mutual TLS
Another very useful feature in Istio 1.4 is the automatic mutual TLS support, which simplifies mutual TLS adoption for services that are onboarded to Istio. You can use this new feature to adopt mutual TLS by only defining an authentication policy. You aren’t required to define DestinationRule objects. If you already have Istio installed, you can enable automatic mutual TLS using the following command (it redeploys Istio with automatic mutual TLS enabled):
istioctl manifest apply --set values.global.mtls.auto=true
Or, you can install Istio (with the demo profile) with automatic mutual TLS enabled:
istioctl manifest apply --set profile=demo --set values.global.mtls.auto=true
This feature takes the burden off operators of tracking which services have been migrated to Istio and adjusting DestinationRule objects accordingly to enable or disable mutual TLS traffic. Istio automatically tracks server workloads with sidecars and configures clients with sidecars to send mutual TLS traffic to them, and plain-text traffic to workloads without sidecars.
The following example walks through deploying the Bookinfo sample application into two namespaces – full and legacy – and shows how automatic mutual TLS works. It assumes you installed Istio using the demo profile with automatic mutual TLS enabled.
- Create a full namespace and deploy the Bookinfo sample application with sidecar injection enabled (workloads in this namespace can serve both plain text and mutual TLS traffic):
  kubectl create ns full
  kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml) -n full
- Create a legacy namespace and deploy the Bookinfo sample application without sidecar injection (workloads in this namespace can only serve plain text traffic):
  kubectl create ns legacy
  kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -n legacy
- Try accessing the productpage of the Bookinfo app in the full namespace from the ratings pod in the legacy namespace. Note that all services in the mesh start in PERMISSIVE mode, which means that services in the full namespace can serve both mutual TLS traffic and plain text traffic. Therefore, the productpage service in the full namespace can accept plain text traffic from the ratings service in the legacy namespace:
  kubectl exec -it $(kubectl get pod -n legacy -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -n legacy -- curl http://productpage.full:9080/productpage | grep -o "<title>.*</title>"
  <title>Simple Bookstore App</title>
- Access the productpage in the full namespace from the ratings pod in the full namespace. Automatic mutual TLS configures the ratings service to send mutual TLS traffic to the productpage service in the full namespace:
  kubectl exec -it $(kubectl get pod -n full -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -n full -- curl http://productpage.full:9080/productpage | grep -o "<title>.*</title>"
  <title>Simple Bookstore App</title>
- Configure the authentication policy for the productpage service in the full namespace to be in STRICT mode instead of PERMISSIVE. This means that the productpage service can only receive mutual TLS traffic:
  cat <<EOF | kubectl apply -n full -f -
  apiVersion: "authentication.istio.io/v1alpha1"
  kind: "Policy"
  metadata:
    name: "productpage"
  spec:
    targets:
    - name: productpage
    peers:
    - mtls: {}
  EOF
- Try to access the productpage in the full namespace from the ratings pod in the legacy namespace. Notice that it fails, because the ratings pod in the legacy namespace can only send plain text traffic:
  kubectl exec -it $(kubectl get pod -n legacy -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -n legacy -- curl http://productpage.full:9080/productpage | grep -o "<title>.*</title>"
  command terminated with exit code 56
- Try accessing the productpage in the full namespace from the ratings pod in the full namespace. This works because automatic mutual TLS configures the ratings pod to send mutual TLS traffic to the productpage service without requiring you to define a DestinationRule object:
  kubectl exec -it $(kubectl get pod -n full -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -n full -- curl http://productpage.full:9080/productpage | grep -o "<title>.*</title>"
  <title>Simple Bookstore App</title>
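If you want to confirm which connections are actually using mutual TLS, istioctl offers the authn tls-check subcommand in Istio 1.4. The following sketch assumes the Bookinfo deployment from the steps above and checks the TLS settings used when a ratings pod in the full namespace calls the productpage service:

  # Check the TLS settings in effect between a ratings pod and the productpage service
  istioctl authn tls-check \
    $(kubectl get pod -n full -l app=ratings -o jsonpath='{.items[0].metadata.name}').full \
    productpage.full.svc.cluster.local

The output reports the host and port, a status of OK or CONFLICT, and the server and client TLS settings in effect, along with the authentication policy and destination rule that apply.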
v1beta1 authorization policy
In Istio, authorization policies control access to workloads in a cluster. In Istio 1.4, the Istio community introduced the v1beta1 authorization policy, which is a significant redesign of the previous v1alpha1 role-based access control (RBAC) policy. The new authorization policy is aligned with the Istio Configuration Model, which helps improve usability by simplifying the API. With the previous v1alpha1 API, you needed to define three configuration resources – ClusterRbacConfig, ServiceRole, and ServiceRoleBinding – to enforce access control on services. With the v1beta1 version, you can achieve the same access control by defining a single AuthorizationPolicy object.
The following example shows how to define an authorization policy object for the reviews service of the Bookinfo app:
kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: "reviews-viewer"
  namespace: default
spec:
  selector:
    matchLabels:
      app: reviews
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/bookinfo-productpage"]
    to:
    - operation:
        methods: ["GET"]
EOF
You use the selector and rules sections under spec to define the AuthorizationPolicy for a specific workload. The selector specifies which workload this authorization policy applies to (in this case, it applies to the reviews service in the default namespace). The rules section defines which workloads are allowed to access the reviews service (using the from:source:principals section) and which operations they can use (as specified in the to:operation:methods section). In the previous example, only GET requests from the bookinfo-productpage workload are allowed to access the reviews service. All other requests are denied.
In the following example, you continue using the Bookinfo app deployed in the two different namespaces – full and legacy – to show how you can use the new v1beta1 authorization policy to grant access to services in a namespace.
- Define an authorization policy that denies all traffic to any workload in both the full and legacy namespaces. In the following example, the spec field is empty, which means it denies all traffic to the workloads in the given namespace. The example shows the policy for the legacy namespace; apply the same policy with the name deny-all-full and namespace: full to cover the full namespace as well:
  kubectl apply -f - <<EOF
  apiVersion: security.istio.io/v1beta1
  kind: AuthorizationPolicy
  metadata:
    name: deny-all-legacy
    namespace: legacy
  spec: {}
  EOF
- Try accessing the productpage in the full namespace from the ratings pod in both the legacy and full namespaces. Both requests fail, because access is denied to all services in the full namespace (the command below shows the attempt from the legacy namespace):
  kubectl exec -it $(kubectl get pod -n legacy -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -n legacy -- curl http://productpage.full:9080/productpage
  RBAC: access denied
- Allow GET request access to the productpage service in the full namespace from any workload. In this case, you didn't specify the from.source fields, so the productpage service in the full namespace can accept GET requests from any workload (in any namespace):
  kubectl apply -f - <<EOF
  apiVersion: "security.istio.io/v1beta1"
  kind: "AuthorizationPolicy"
  metadata:
    name: "productpage-viewer"
    namespace: full
  spec:
    selector:
      matchLabels:
        app: productpage
    rules:
    - to:
      - operation:
          methods: ["GET"]
  EOF
- Try to access the productpage in the full namespace from the ratings pod in the legacy namespace. This displays the productpage, but the details and reviews sections of the page are empty. This is because the deny-all-full authorization policy still applies to the details, reviews, and ratings services; so far, you have only allowed access to the productpage service in the full namespace:
  <title>Simple Bookstore App</title>
  ……
  <div class="row">
    <div class="col-md-6">
      <h4 class="text-center text-primary">Error fetching product details!</h4>
      <p>Sorry, product details are currently unavailable for this book.</p>
    </div>
    <div class="col-md-6">
      <h4 class="text-center text-primary">Error fetching product reviews!</h4>
      <p>Sorry, product reviews are currently unavailable for this book.</p>
    </div>
  </div>
- To allow the productpage service to access the details service in the full namespace, create another authorization policy for the details service in the full namespace. In the from section of the rules, specify the sources that can access the details service. The source can be a service account or a namespace:
  kubectl apply -f - <<EOF
  apiVersion: "security.istio.io/v1beta1"
  kind: "AuthorizationPolicy"
  metadata:
    name: "details-viewer"
    namespace: full
  spec:
    selector:
      matchLabels:
        app: details
    rules:
    - from:
      - source:
          principals: ["cluster.local/ns/full/sa/bookinfo-productpage"]
      to:
      - operation:
          methods: ["GET"]
  EOF
- Try accessing the productpage in the full namespace from the ratings pod in the legacy namespace, and you will be able to see the details section. The reviews section is still empty. To get the reviews and ratings, you will have to create similar authorization policies for the reviews and ratings services in the full namespace, allowing access from the services that call them (productpage calls reviews, and reviews calls ratings), as sketched after this list:
  <div class="row">
    <div class="col-md-6">
      <h4 class="text-center text-primary">Book Details</h4>
      <dl>
        <dt>Type:</dt>paperback
        <dt>Pages:</dt>200
        <dt>Publisher:</dt>PublisherA
        <dt>Language:</dt>English
        <dt>ISBN-10:</dt>1234567890
        <dt>ISBN-13:</dt>123-1234567890
      </dl>
    </div>
    <div class="col-md-6">
      <h4 class="text-center text-primary">Error fetching product reviews!</h4>
      <p>Sorry, product reviews are currently unavailable for this book.</p>
    </div>
  </div>
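As an illustration, a policy granting the productpage service access to the reviews service might look like the following sketch (the name reviews-viewer is illustrative; the principal reuses the bookinfo-productpage service account from the details-viewer policy, and you could instead scope the source by namespace with namespaces: ["full"]):

kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: "reviews-viewer"
  namespace: full
spec:
  selector:
    matchLabels:
      app: reviews
  rules:
  - from:
    - source:
        # Allow only requests made with the productpage service account identity
        principals: ["cluster.local/ns/full/sa/bookinfo-productpage"]
    to:
    - operation:
        methods: ["GET"]
EOF

A matching policy for the ratings service would select app: ratings and use the bookinfo-reviews service account as the source, since ratings is called by the reviews service.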
Updates to istioctl
Several new istioctl experimental subcommands in Istio 1.4 help operators manage the mesh. One is the analyze command, which helps you analyze a live cluster for configuration problems and validate new configuration files prior to deploying them:
istioctl x analyze -k
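You can also point analyze at configuration files before applying them. For example (my-new-config.yaml is an illustrative file name; adding -k analyzes the files together with the live cluster):

istioctl x analyze my-new-config.yaml
istioctl x analyze -k my-new-config.yaml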
The following experimental subcommands were also added in Istio 1.4: create-remote-secret and wait.
Other enhancements
Another area to highlight in the Istio 1.4 release is the work on improving mixer-less telemetry support. The in-proxy generation of HTTP metrics graduated from experimental to alpha, and experimental support was added for TCP metrics.
The Istio client-go library, newly released with Istio 1.4, is a Go library that helps developers programmatically access the Istio APIs in a Kubernetes cluster. Using this library, you can write controllers that perform CRUD operations on Istio custom resources in a Kubernetes cluster.
Istio 1.4 also includes improvements to Envoy's feature set, including the ability to mirror a configurable percentage of incoming traffic instead of the default 100 percent. The following VirtualService routes all traffic to the v1 subset of the httpbin service and mirrors 80 percent of it to the v2 subset:
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
    mirror_percent: 80
EOF
Istio and IBM
The Istio project has seen tremendous growth and adoption since its launch in 2017. As one of the co-founders of the Istio project (along with Google and Lyft), IBM has actively contributed to the project since its launch. With more than 300 companies contributing to the project, IBM ranks second in contributions across the different releases. In Istio 1.4, IBMers worked on the design and implementation of key features including automatic mutual TLS, the Istio operator, new istioctl features, and updates to improve usability and operation. IBM also contributed to the virtual machine mesh expansion work to connect virtual machine workloads to containers. IBM has several active maintainers co-leading different workgroups in Istio, serving on the Istio Technical Oversight Committee, and serving on the Istio Steering Committee.
IBM provides Istio on the IBM Cloud Kubernetes Service as a managed add-on service, which automatically keeps all Istio components up to date. IBM announced the general availability of managed Istio on the IBM Cloud Kubernetes Service in November 2019.
Summary
Looking back at 2019, the Istio project grew tremendously in terms of ecosystem and community. Also, more companies adopted Istio in production. Istio was named one of the top five fastest growing open source projects in all of GitHub in 2019. The community made many improvements to Istio over the four releases in 2019, with a focus on improving performance and usability and making it simpler for operators to deploy, manage, and troubleshoot. We look forward to an exciting new year and many more accomplishments for the Istio project and the community in 2020.
Mariam John, Software Engineer, IBM Cognitive Applications, originally published this post on the IBM Developer blog.