iconik Microservices
iconik uses a microservices architecture: each service is a small, self-contained unit of functionality concerned only with its own business function, and these services are composed together to form the overall application.
This gives flexibility in deployment and allows individual parts of the application to be scaled, either dynamically on demand or as needed, to match customer load on the system.
Each microservice is packaged in a Docker container based on a minimal Alpine Linux distribution, containing only the resources it needs in order to keep a small footprint. These containers are built by an automated build system and deployed periodically as part of a release or bug-fix deployment.
Kubernetes is used to automate the deployment, scaling, and management of these containers. On Google Cloud we use the Google Kubernetes Engine (GKE) service to take care of the Kubernetes infrastructure.
Kubernetes is given a cluster of hosts (using Google Compute Engine on Google Cloud) on which it dynamically schedules the Docker containers into Pods. Kubernetes uses an internal IP addressing scheme for communication between services that is not publicly accessible.
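For illustration only, the kind of Deployment manifest Kubernetes uses to schedule a containerized microservice into Pods might look roughly like the sketch below. The service name, image path, replica count, and port are hypothetical and are not taken from iconik's actual configuration.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service            # hypothetical microservice name
spec:
  replicas: 3                      # scaled up or down to match load
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: example-service
          # hypothetical Alpine-based image produced by an automated build
          image: gcr.io/example-project/example-service:1.0.0
          ports:
            - containerPort: 8080  # reachable only on the cluster-internal network
```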
Ingress and egress go through a Kubernetes-managed Ingress object, which is load balanced by an L7 global load balancer running on Google Cloud's premium network tier.
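As a sketch of this pattern, an Ingress object on GKE that fronts such a service with Google Cloud's L7 global load balancer could be declared roughly as follows; the hostname, service name, and port are again hypothetical, not iconik's actual configuration.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # On GKE, the "gce" ingress class provisions an external L7 global load balancer
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - host: app.example.com        # hypothetical public hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 8080
```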
Internal logging and monitoring of GKE, the containers, and the Compute Engine nodes are handled by Google Cloud Stackdriver.
Microservice API Documentation
Each microservice documents its own API; this documentation is presented through the Application Gateway microservice and published at https://app.iconik.io/docs/