The Kaa platform architecture overview in 2.5 minutes.
The Kaa platform is a cloud-native Internet of Things platform.
The architecture of the platform rests upon the microservice approach and uses it to the fullest. Each Kaa microservice is an independent building block. This architecture lets you mix and match such blocks to create coherent solutions.
On the scale of the whole platform, Kaa microservices are just a bunch of black boxes doing their job. This means that the internal architecture of any individual microservice does not matter to the architecture of the whole Kaa platform.
To achieve this kind of microservice abstraction, Kaa engineers use a number of techniques.
First, all inter-service communication protocols use HTTP and NATS to transport messages, and JSON and Avro to encode them. All these technologies are well-defined and have multiple implementations for all mainstream programming languages, so we’re not tied to any implementation language. At present, most Kaa microservices are written in Java, Go, and TypeScript (NodeJS).
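As a minimal illustration of this transport and encoding combination, here is a sketch in Go that publishes a JSON-encoded message over NATS using the official nats.go client. The subject name and payload structure are invented for this example and are not part of any Kaa protocol.

```go
package main

import (
	"encoding/json"
	"log"

	"github.com/nats-io/nats.go"
)

// TelemetryEvent is a hypothetical payload; actual Kaa protocols define
// their own message schemas (JSON or Avro).
type TelemetryEvent struct {
	EndpointID  string  `json:"endpointId"`
	Temperature float64 `json:"temperature"`
}

func main() {
	nc, err := nats.Connect(nats.DefaultURL) // nats://127.0.0.1:4222
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	payload, err := json.Marshal(TelemetryEvent{EndpointID: "ep-42", Temperature: 21.5})
	if err != nil {
		log.Fatal(err)
	}

	// Any service that knows the subject and the schema can consume this
	// message, regardless of its own implementation language.
	if err := nc.Publish("example.telemetry", payload); err != nil {
		log.Fatal(err)
	}
}
```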
Second, all Kaa microservices are packaged as Docker images. Docker effectively abstracts away all the microservice setup and runtime dependencies: running a Docker container with Java inside is no different from running a Go-powered one. This helps operations teams deploy Kaa solutions without having to set up the dependencies themselves.
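For illustration, a minimal Dockerfile for a hypothetical Go-based service might look as follows; the module layout, binary name, and base images are assumptions, not taken from an actual Kaa service.

```dockerfile
# Build stage: compile the service into a static binary.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /service ./cmd/service  # placeholder path

# Runtime stage: ship only the binary. The image carries no language
# toolchain, so operating it is no different from a Java-based container.
FROM gcr.io/distroless/static
COPY --from=build /service /service
ENTRYPOINT ["/service"]
```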
Third, on top of Docker we use container orchestration systems, Kubernetes being the preferred one. Numerous ecosystem projects, such as Helm, NGINX, NATS, Prometheus, Grafana, and many others improve the Kaa user experience and help operating Kaa-based solutions at any scale and complexity.
Below is a diagram of how Kaa microservices are typically composed:
Combined with well-defined and documented interfaces, these techniques allow us to swap the whole microservice implementation without anyone ever noticing.
To make microservices composable, the Kaa platform uses well-defined NATS-based protocols. We use a lightweight change management procedure to develop these protocols, and track them separately from the microservice implementations. This allows multiple implementations of a single protocol to co-exist and cooperate within a single solution.
The main inter-service communication guidelines are defined in the 3/ISM (Inter-Service Messaging) RFC. All other inter-service protocols build on 3/ISM and usually define one or two roles. For example, 4/ESP (Extension Service Protocol) defines the “communication service” and “extension service” roles, and 6/CDTP (Configuration Data Transport Protocol) defines the “configuration data provider” and “configuration data consumer” roles.
This is extremely useful because it allows each role to have multiple diverse implementations.
For example, we can have several “communication service” implementations, each supporting a different client-facing protocol: MQTT, CoAP, HTTP, or a proprietary UDP-based protocol. The only requirement is that each of them implements the “communication service” side of 4/ESP. This makes it possible to swap the client communication layer without affecting any other service: the change is completely transparent. Furthermore, you can deploy multiple communication service implementations within a single solution to handle clients that communicate over different protocols.
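To make the role separation concrete, here is a hedged Go sketch of both 4/ESP roles exchanging a message over NATS. The subject name and payload are invented for illustration; the real subjects and message formats are defined by the 4/ESP RFC.

```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Extension-service role: consume client payloads, regardless of which
	// transport (MQTT, CoAP, HTTP, ...) the communication service spoke.
	sub, err := nc.SubscribeSync("esp.example.client-data") // subject is a placeholder
	if err != nil {
		log.Fatal(err)
	}

	// Communication-service role: forward a payload received from a client.
	if err := nc.Publish("esp.example.client-data", []byte(`{"temp":21.5}`)); err != nil {
		log.Fatal(err)
	}

	msg, err := sub.NextMsg(2 * time.Second)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("extension service received %d bytes", len(msg.Data))
}
```

Because both roles agree only on the subject and the message format, either side can be reimplemented, or deployed in several variants, without the other noticing.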
Before we describe the scalability features of the Kaa platform, let’s define some terms.
A Kaa service is a microservice packaged in a Docker image.
A service instance is a Kaa service plus its configuration.
To make a service do something useful, you need to deploy at least one service instance replica (or simply replica): a running Docker container of that service instance.
| Service | Service instance | Service instance replica |
|---|---|---|
| A microservice packaged in a Docker image | A Kaa service plus its configuration | A running Docker container of a service instance |
Each service instance may have as many replicas as needed to handle the load. Most service replicas are independent and do not communicate with each other: they have neither a master-slave nor a master-master relationship.
Instance replicas leverage NATS queue groups, so any request directed to a service instance can be handled by any replica. There is no single point of failure.
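Below is a minimal sketch of this queue-group pattern using the official Go NATS client (github.com/nats-io/nats.go); the subject and queue-group names are placeholders invented for this example.

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Every replica of the same service instance subscribes with the same
	// queue-group name; NATS delivers each request to exactly one member.
	_, err = nc.QueueSubscribe("example.requests", "example-instance", func(m *nats.Msg) {
		// Handle the request and reply; no coordination with other replicas.
		m.Respond([]byte("ok"))
	})
	if err != nil {
		log.Fatal(err)
	}
	select {} // keep this replica running
}
```

Starting more copies of this process is all it takes to add capacity: NATS balances requests across the group automatically.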
All Kaa services can be scaled horizontally. Many services do not share any data between the replicas at all; others share state using Redis or another data store. In all cases, handling horizontal scaling is internal to each service. Most often it boils down to scaling the underlying data stores, and we have taken care to select stores that scale well.
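As a hedged sketch of the shared-state variant, here is an example using the go-redis client (github.com/redis/go-redis/v9); the key layout is invented for illustration, since each service defines its storage schema internally.

```go
package main

import (
	"context"
	"log"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// One replica records a piece of state...
	if err := rdb.Set(ctx, "example:endpoint:ep-42:state", "online", 0).Err(); err != nil {
		log.Fatal(err)
	}

	// ...and any other replica can read it back: the state lives in the
	// store, not in the replica, which keeps replicas interchangeable.
	state, err := rdb.Get(ctx, "example:endpoint:ep-42:state").Result()
	if err != nil {
		log.Fatal(err)
	}
	log.Println("endpoint state:", state)
}
```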
The Kaa platform leverages Kubernetes as an enterprise-grade orchestration platform for all solutions. It abstracts away container lifecycle management, node failure mitigation, networking, and much more, letting you keep your focus on the business domain.
Kubernetes is built around declarative descriptors: you define what you need, and Kubernetes figures out how to get it. The declarative approach also gives you the flexibility to run your cluster anywhere without changing a single line of application code.
This allows running Kaa almost anywhere: from a private bare-metal cluster to a public cloud such as Amazon AWS or Google Cloud Platform (GCP).
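As an illustration of the declarative approach, here is a hedged sketch of a Kubernetes Deployment for a hypothetical service instance; the names, image, and port are placeholders, not taken from an actual Kaa solution.

```yaml
# Declares the desired state: three replicas of a (hypothetical) service
# instance. Kubernetes decides where to run them and restarts them on failure.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service-instance
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-service-instance
  template:
    metadata:
      labels:
        app: example-service-instance
    spec:
      containers:
        - name: service
          image: example/kaa-service:1.0.0  # placeholder image
          ports:
            - containerPort: 8080           # placeholder port
```

The same manifest applies unchanged whether the cluster runs on bare metal, AWS, or GCP; only the cluster context differs.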