If you are new to cloud-native development and Kubernetes, creating and deploying an application can be a bit daunting. Your first challenge: create a healthy cloud-native architecture. Then, figure out how to map that architecture onto Kubernetes. Finally, make sure your app scales – including storage. After all, the whole reason for creating a cloud-native app is to scale up when demand increases, and down to zero when unused, right?
In this article, you’ll learn how to conquer these challenges by deploying a Java Spring framework reference app named Pet Clinic. This is an app a fictional pet clinic business might use to scale its operations to a national or global level. We’ll extend the standard Pet Clinic app in a few ways to make it truly scalable:
- First, we’ll leverage the power of Kubernetes, which makes deploying and scaling cloud-native apps relatively easy.
- Second, we’ll use Apache Cassandra as the backend store, because Cassandra is a world-class distributed scalable database.
- Additionally, we’ll use a Reactive implementation of Pet Clinic to demonstrate the power and scalability of Reactive systems.
Spring Pet Clinic Example App
The Pet Clinic app keeps track of entities like pet owners, veterinarians, pet types, and veterinary specialties. The following screenshot shows the user interface.
The Pet Clinic app is divided into frontend and backend services. The frontend provides the user interface, and the backend uses microservices to persist data through a CRUD (Create, Read, Update, Delete) interface. This design follows cloud-native best practices and allows each service to be scaled independently.
In this example, you’ll learn to deploy a scalable version of the Pet Clinic app using Kubernetes. This version of the app uses the same frontend as the standard version, but the backend has been modified to use Cassandra and reactive programming.
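To make that concrete, here is a minimal sketch, assuming Spring Data Cassandra with Project Reactor, of what a reactive repository looks like. The entity, table, and column names here are hypothetical, not the actual Pet Clinic schema:

```java
// A minimal sketch (not the actual Pet Clinic source) of a reactive
// Spring Data Cassandra repository. Names are hypothetical.
import java.util.UUID;

import org.springframework.data.cassandra.core.mapping.PrimaryKey;
import org.springframework.data.cassandra.core.mapping.Table;
import org.springframework.data.cassandra.repository.AllowFiltering;
import org.springframework.data.cassandra.repository.ReactiveCassandraRepository;

import reactor.core.publisher.Flux;

@Table("petclinic_owner")   // maps the entity to a Cassandra table
public class Owner {
    @PrimaryKey
    private UUID id;
    private String lastName;
    // constructors, getters, and setters omitted for brevity
}

// Queries return non-blocking Flux/Mono streams instead of Lists.
interface OwnerRepository extends ReactiveCassandraRepository<Owner, UUID> {
    @AllowFiltering   // demo only; a production schema would be modeled for this query
    Flux<Owner> findByLastName(String lastName);
}
```

With a reactive stack, a burst of requests ties up event-loop threads only while work is actually happening, which is what lets the backend scale under load.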
Apache Cassandra is an open-source distributed NoSQL database that is ideal for many types of cloud-native applications. Cassandra scales massively and supports replication across multiple datacenters and regions. The easiest way to experiment with Cassandra is DataStax Astra (which comes with free credit), a Cassandra-as-a-service offering, and you can watch a replay of our workshop deploying Pet Clinic with Astra on YouTube. For the purposes of this example, however, we’ll deploy the entire application (including the database) on Kubernetes.
K8ssandra
Wait, databases for customer-facing apps run on Kubernetes? It may be hard to believe, but there are a few cloud-native databases, like Apache Cassandra, that shine in a containerized environment. Starting a distributed database like Cassandra on Kubernetes from scratch can be a challenge. K8ssandra makes it easy. K8ssandra is an integrated collection of open source tools for running Cassandra in your Kubernetes cluster, including the following:
- Cass-operator, a Kubernetes operator, which helps deploy, scale, and manage Cassandra clusters
- Monitoring dashboards based on Prometheus and Grafana to get a single view of your distributed system
- Medusa, a tool for backing up and restoring data
- Reaper, a tool for maintaining distributed consistency
- Stargate, a data gateway providing standard APIs for Cassandra, including REST, GraphQL, and document-style APIs
Deploy it Yourself!
First, you’ll need a Kubernetes cluster to deploy on. If you’re just running on your desktop, you can use a Kubernetes Kind cluster. Think of Kind as a Kubernetes cluster simulator: the cluster’s nodes run as Docker containers, so you can run an entire Kubernetes cluster on a single machine. Kind is easy to install and use.
Note: You can find instructions and minimum requirements (don’t forget to check these!) for local installations including Kind on the K8ssandra site. There are also instructions for deploying K8ssandra on Kubernetes implementations such as Google Kubernetes Engine (GKE) and others.
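For example, once you’ve installed the kind CLI, creating a local cluster is a single command (the cluster name here is arbitrary):

```bash
# Creates a single-node Kubernetes cluster running in Docker
kind create cluster --name k8ssandra-demo
```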
Once you have a running Kubernetes cluster, you can go to work. The first step is to deploy K8ssandra. Helm makes this easy. Here’s how to do it:
```bash
helm repo add k8ssandra https://helm.k8ssandra.io/stable
helm repo update
helm install k8ssandra k8ssandra/k8ssandra -f k8ssandra.yaml
```
In this command:

- `k8ssandra` is the name of the deployed instance.
- `k8ssandra/k8ssandra` references the Helm repo used.
- `k8ssandra.yaml` is a configuration file that customizes the K8ssandra install – in this case, for a development environment.
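Once the install completes, you can sanity-check the release with standard commands; the exact pod names will vary with the release name and chart version:

```bash
# Show the status of the Helm release
helm status k8ssandra

# List the pods the chart created (Cassandra, Stargate, Prometheus, Grafana, etc.)
kubectl get pods
```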
Let’s have a look at the configuration file, which you can also find under the K8ssandra examples directory in the main K8ssandra repo on GitHub.
```yaml
cassandra:
  version: "3.11.10"
  cassandraLibDirVolume:
    storageClass: local-path
    size: 5Gi
  allowMultipleNodesPerWorker: true
  heap:
    size: 1G
    newGenSize: 1G
  resources:
    requests:
      cpu: 1000m
      memory: 2Gi
    limits:
      cpu: 1000m
      memory: 2Gi
  datacenters:
    - name: dc1
      size: 1
      racks:
        - name: default
stargate:
  replicas: 1
  heapMB: 256
  cpuReqMillicores: 200
  cpuLimMillicores: 1000
```
This configuration file overrides a few K8ssandra defaults to create a more resource-limited cluster suitable for demonstration and development work.
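One nice property of driving the install from this file: scaling the database later is just an edit plus a `helm upgrade`. For example, on a cluster with enough worker capacity you might raise the datacenter `size` from 1 to 3 and re-apply (a sketch, assuming the same release name as above):

```bash
# After editing k8ssandra.yaml (e.g. setting datacenters[0].size: 3)
helm upgrade k8ssandra k8ssandra/k8ssandra -f k8ssandra.yaml
```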
It’s a good idea to wait until the Cassandra cluster is ready before starting your application. Since Cass-operator creates a custom resource of type `cassandradatacenter`, we can track its status like this:
```bash
kubectl wait --for=condition=Ready cassandradatacenter/dc1 --timeout=240s
```
Then, you can deploy the Pet Clinic app with a kubectl command:
```bash
kubectl apply -f petclinic.yaml
```
This command tells Kubernetes to make your system look like what is described in the `petclinic.yaml` manifest file. As above, you can find this file under the K8ssandra examples on GitHub. Let’s break this up a bit.
The first section creates a deployment for the backend. Note the use of environment variables to pass Cassandra connection information to the backend. This includes Cassandra username/password credentials that K8ssandra creates and stores in a secret.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic
  labels:
    app: petclinic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: petclinic-backend
  template:
    metadata:
      labels:
        app: petclinic-backend
    spec:
      containers:
        - name: petclinic-backend
          image: "datastaxdevs/petclinic-backend"
          env:
            - name: CASSANDRA_USER
              valueFrom:
                secretKeyRef:
                  name: k8ssandra-superuser
                  key: username
            - name: CASSANDRA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: k8ssandra-superuser
                  key: password
            - name: CASSANDRA_CONTACT_POINTS
              value: "k8ssandra-dc1-service:9042"
            - name: CASSANDRA_LOCAL_DC
              value: "dc1"
            - name: CASSANDRA_KEYSPACE_CREATE
              value: "true"
            - name: CASSANDRA_KEYSPACE_NAME
              value: "spring_petclinic"
            - name: CASSANDRA_KEYSPACE_CQL
              value: "CREATE KEYSPACE IF NOT EXISTS spring_petclinic WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'dc1' : 1 };"
```
This configuration also directs the Pet Clinic backend to create the Cassandra keyspace in which it will store data. The replication strategy shown here is sufficient for demonstration purposes in a single-node cluster but should be modified for a production environment.
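For instance, once you have a three-node production datacenter, you could raise the replication factor with a CQL statement like the following (a sketch; run it through cqlsh or Stargate):

```sql
-- Example only: keep three replicas of every row in dc1
ALTER KEYSPACE spring_petclinic
  WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'dc1' : 3 };
```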
The second section creates a Kubernetes service to expose the backend. This service provides network load-balancing endpoints. For the purposes of this example, the service uses the default type (`ClusterIP`), which provides access to the backend from within the Kubernetes cluster only. A production deployment would make use of a Kubernetes Ingress instead (a sketch appears after the frontend service below).
```yaml
kind: Service
apiVersion: v1
metadata:
  name: petclinic-backend
spec:
  #type: NodePort
  selector:
    app: petclinic-backend
  ports:
    # Default port used by the image
    - port: 9966
```
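If you’d like to verify the backend from inside the cluster, one option is a throwaway curl pod. Note that the API path below is an assumption, based on the `/petclinic/api` context path used by the standard spring-petclinic-rest backend; your image may differ:

```bash
# Run a temporary pod and call the backend through its service DNS name
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://petclinic-backend:9966/petclinic/api/owners
```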
The third section creates a deployment for the frontend. This deployment is pretty simple, as the frontend is configured to look for the backend by name.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic-frontend
  labels:
    app: petclinic-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: petclinic-frontend
  template:
    metadata:
      labels:
        app: petclinic-frontend
    spec:
      containers:
        - name: petclinic-frontend
          image: "datastaxdevs/petclinic-frontend"
```
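Because the frontend is stateless, scaling it out later is a one-liner; Kubernetes load-balances across the replicas through the service defined next:

```bash
# Run three frontend replicas instead of one
kubectl scale deployment petclinic-frontend --replicas=3
```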
The fourth and final section creates a Kubernetes service exposing the frontend. Again, this service is only reachable from within the Kubernetes cluster. In a moment, you’ll see how to expose it outside the cluster so you can see the user interface.
```yaml
kind: Service
apiVersion: v1
metadata:
  name: petclinic-frontend
spec:
  selector:
    app: petclinic-frontend
  ports:
    - port: 8080
```
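As noted earlier, a production deployment would route external traffic through an Ingress rather than port forwarding. A minimal sketch, assuming an NGINX ingress controller is installed and using a hypothetical hostname:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: petclinic-frontend
spec:
  ingressClassName: nginx            # assumes an NGINX ingress controller
  rules:
    - host: petclinic.example.com    # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: petclinic-frontend
                port:
                  number: 8080
```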
Once you’ve deployed the Pet Clinic application, you can temporarily expose the port for the user interface using port forwarding:
```bash
kubectl port-forward service/petclinic-frontend 8080:8080
```
Then, you can access the user interface at http://localhost:8080. Experiment with adding and updating data. If you want to get more adventurous, try killing the Cassandra pod (it will have a name like `k8ssandra-dc1-default-sts-0`), and you’ll see that your data is still available after Kubernetes replaces the pod.
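Concretely, the experiment looks like this:

```bash
# Delete the single Cassandra pod (name follows the default K8ssandra naming scheme)
kubectl delete pod k8ssandra-dc1-default-sts-0

# Watch cass-operator's StatefulSet recreate it, then reload the app
kubectl get pods -w
```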
Run this example in your browser
If you lack the time or resources to do the full installation we’ve described here, another option is available. This free 15-minute tutorial on the DataStax developer site explains the Pet Clinic application architecture in more detail and includes a hands-on learning exercise: a Katacoda scenario that runs a full development environment right in your browser window. The tutorial is part of a full course you can use to prepare for a Cassandra on Kubernetes certification exam. The Katacoda exercise also includes steps to investigate the dashboards provided by Prometheus and Grafana, and to access the backend using Stargate’s REST API.
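If you want to try the Stargate REST API against your own cluster instead, the flow is: authenticate, then query with the returned token. A sketch, assuming the default K8ssandra Stargate service name (check `kubectl get services` in your install) and a hypothetical table and column name:

```bash
# Forward Stargate's auth (8081) and REST (8082) ports locally
kubectl port-forward service/k8ssandra-dc1-stargate-service 8081:8081 8082:8082

# Exchange the superuser credentials for an auth token
curl -s -X POST http://localhost:8081/v1/auth \
  -H 'Content-Type: application/json' \
  -d '{"username": "<user>", "password": "<password>"}'

# Query rows via the REST API, passing the token from the previous call
curl -s -G http://localhost:8082/v2/keyspaces/spring_petclinic/petclinic_owner \
  --data-urlencode 'where={"last_name": {"$eq": "Davis"}}' \
  -H "X-Cassandra-Token: <token>"
```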
Conclusion
Of course, cloud-native apps can be very sophisticated. But if you are new to cloud-native apps and Kubernetes, this Pet Clinic example is a great place to begin. It demonstrates the main concepts you need to build scalable cloud-native Kubernetes apps using K8ssandra – a great starting point for your cloud-native adventures!