Cloud Overview
An environment is represented by a cloud of micro-services, and multiple environments together form a product (dev, stg, prod).
To visualize this behavior better, the image below shows how one environment cloud behaves internally.
The cloud's entry point is the nginx layer, which can consist of N instances. When a user makes a request to the domain, nginx is the first to pick it up.
Nginx then proxies the request arriving on port 80 upstream to the controller layer, which listens on port 4000.
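As a rough illustration, the nginx layer's behavior could be expressed with a configuration fragment like the one below. This is a hedged sketch, not the actual configuration: the upstream name and controller addresses are placeholders invented for the example.

```nginx
# Hypothetical pool of controller instances on the private network
# (the name "controllers" and these addresses are placeholders).
upstream controllers {
    server 10.0.0.11:4000;
    server 10.0.0.12:4000;
}

server {
    listen 80;

    location / {
        # Proxy every request arriving on port 80 upstream to a
        # controller instance listening on port 4000.
        proxy_pass http://controllers;
    }
}
```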
The controller layer can also contain N instances. Once one of them receives the request from nginx, it analyzes it and redirects it to the corresponding micro-service so that the latter can handle it.
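The controller's routing step can be sketched roughly as follows. This is a minimal illustration, assuming a path-prefix routing table; the route prefixes and service names are invented for the example and are not part of the actual system.

```python
# Hypothetical routing table: request path prefix -> micro-service name.
ROUTES = {
    "/users": "user-service",
    "/orders": "order-service",
}

def route(path):
    """Pick the micro-service that should handle a request path.

    A controller instance would run logic like this after receiving a
    request from nginx, then forward the request to the chosen service.
    """
    for prefix, service in ROUTES.items():
        if path.startswith(prefix):
            return service
    # Fall back to a default handler when no prefix matches.
    return "default-service"
```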
The micro-services can likewise run N instances each for redundancy, and the overall result is a highly available cloud.
This redundancy also acts as load balancing: requests are spread across instances, which keeps the system responsive and mitigates sluggishness under load.
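One simple way to spread requests across redundant instances is round-robin selection, sketched below. This is only an illustration of the idea; the instance names are placeholders, and the real cloud may use a different balancing strategy.

```python
from itertools import cycle

# Hypothetical pool of redundant instances for one layer
# (names are placeholders for the example).
INSTANCES = ["instance-1", "instance-2", "instance-3"]

# Round-robin iterator: each incoming request is handed to the next
# instance in the pool, spreading load evenly across the replicas.
_pool = cycle(INSTANCES)

def pick_instance():
    """Return the instance that should handle the next request."""
    return next(_pool)
```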
Internal Communication
In addition to how requests arriving from outside the cloud are handled, the cloud also provides a network flow for its micro-services to communicate with one another.
Since the controller is the only layer that knows whether services are alive, every service that needs to communicate with another service should do so via the cloud's controller(s).
The service first looks for an available controller to make the internal request, then invokes it and passes on the inputs.
The same behavior in terms of load balancing is applied when internal cloud communication occurs.
The only difference is that services within the cloud communicate with one another over the private network IP range instead of leaving the cloud and coming back in.
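The internal flow described above can be sketched as follows. This is a hedged illustration under assumed names: the controller registry, its private IP addresses, and the `call_service` helper are all invented for the example, and a real call would be an HTTP/RPC request rather than a returned dictionary.

```python
import random

# Hypothetical liveness registry: which controller instances are
# currently reachable on the private network (addresses are placeholders).
CONTROLLERS = {
    "10.0.0.11": True,   # alive
    "10.0.0.12": False,  # down
}

def find_available_controller():
    """Return the private IP of any controller currently marked alive."""
    alive = [ip for ip, up in CONTROLLERS.items() if up]
    if not alive:
        raise RuntimeError("no controller available")
    return random.choice(alive)

def call_service(target_service, payload):
    """Route an internal request through a controller rather than
    calling the target service directly; only the controller knows
    which instances of target_service are alive."""
    controller_ip = find_available_controller()
    # In the real cloud this would be a request on the private network,
    # conceptually: POST http://{controller_ip}:4000/... with payload.
    return {"via": controller_ip, "service": target_service, "payload": payload}
```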
This behavior explains why the diagram shows bidirectional white arrows between services and controllers inside the cloud, whereas the arrow from nginx towards the controller points in one direction only.