[Gliffy diagram: SOAJS Cloud]

Here is a visualization of the SOAJS cloud once deployed. 

The entry point to the cloud is Nginx. Remember that Nginx is the web server we configured to forward all incoming traffic to the controller. 

Moreover, we have multiple instances of Nginx as well as multiple instances of the controller.

Cloud Overview


An Environment is represented by a cloud of micro-services, and multiple environments make up a product (dev, stg, prod).

To visualize this behavior better, the image below shows how one environment cloud behaves internally.

The cloud's entry point is the Nginx layer, which can consist of N instances. When a user makes a request to the domain, Nginx is the first to pick it up.

Nginx then upstreams requests arriving at port 80 to the controller layer, which listens on port 4000.
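As a rough illustration of that forwarding rule, an Nginx upstream block could look like the following. The IPs and the upstream name are hypothetical placeholders, not values from an actual SOAJS deployment:

```nginx
# Hypothetical example: proxy all traffic arriving on port 80
# to the controller layer listening on port 4000.
upstream controller_layer {
    server 10.0.0.2:4000;
    server 10.0.0.3:4000;
}

server {
    listen 80;
    location / {
        proxy_pass http://controller_layer;
    }
}
```

Listing several `server` entries in one upstream is what lets the Nginx layer spread requests across multiple controller instances.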

The controller layer can also contain N instances. Once one of them receives the request from Nginx, it analyzes it and redirects it to the corresponding micro-service so that the latter can handle it.

The micro-services can also run N instances each for redundancy, and the overall result is a highly available cloud.

This redundancy also enables load balancing, which maintains system-wide responsiveness and mitigates sluggishness.

After the request is forwarded to the controller, the controller decapsulates the session in order to forward the request to the correct service. This happens only after some preliminary checks.

These preliminary checks serve two purposes:

  • a security feature that stops "bad" requests before they reach the APIs.
  • a performance feature that prevents "bad" requests from wasting the services' resources.
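A minimal sketch of such preliminary checks might look like the following. The check names and request shape are illustrative assumptions, not the actual SOAJS controller API:

```python
# Hypothetical preliminary checks a controller might run before
# forwarding a request. Rejecting early serves both purposes above:
# security (bad requests never reach the APIs) and performance
# (no service resources are wasted on them).
def preliminary_checks(request, known_services):
    """Return (ok, reason) for an incoming request dict."""
    # Security: reject requests that carry no key.
    if not request.get("key"):
        return False, "missing key"
    # Security/performance: reject requests targeting unknown services.
    if request.get("service") not in known_services:
        return False, "unknown service"
    return True, "ok"
```

Only requests that pass both checks would be handed on to the dispatch step.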

After the controller finishes its checks, it delivers the request via round-robin distribution to the next available instance of the required service.
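Round-robin distribution simply hands each new request to the next instance in the list, wrapping around at the end. A minimal sketch (the instance addresses are hypothetical):

```python
from itertools import cycle

# Hypothetical service instances; a real deployment discovers these
# dynamically rather than hard-coding them.
instances = ["10.0.0.11:4100", "10.0.0.12:4100", "10.0.0.13:4100"]

_rr = cycle(instances)

def next_instance():
    """Return the next service instance in round-robin order."""
    return next(_rr)

# Four consecutive picks: after the third instance, the rotation
# wraps back to the first one.
picked = [next_instance() for _ in range(4)]
```

Because every instance takes its turn, no single instance absorbs all the traffic, which is what makes the redundancy act as a load balancer.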

Note also that the services themselves do not interact directly with one another; they always pass through the controller.


Internal Communication


In addition to how requests arriving from outside the cloud are handled, the cloud also provides a network flow for its micro-services to communicate with one another.

Since the controller is the only layer that knows whether services are alive, every service that needs to communicate with another service does so via the cloud's controller(s).

The service first looks for an available controller to make the internal request, then invokes it and passes on the inputs.

The same behavior in terms of load balancing is applied when internal cloud communication occurs.

The only difference is that services within the cloud communicate with one another over the private network IP range instead of going outside the cloud and back in.
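The internal flow described above can be sketched as follows. The controller addresses, the `is_alive` probe, and the return shape are all hypothetical stand-ins, not the SOAJS SDK:

```python
# Illustrative only: a service asking a controller to reach another
# service over the private network, instead of leaving the cloud.
CONTROLLERS = ["10.0.0.2:4000", "10.0.0.3:4000"]  # hypothetical private IPs

def call_service(service_name, payload, is_alive):
    """Route an internal request through the first reachable controller.

    is_alive is a probe (address -> bool); in a real deployment this
    would be replaced by the cloud's awareness of controller health.
    """
    for controller in CONTROLLERS:
        if is_alive(controller):
            # A real call would be an HTTP request such as
            # http://<controller>/<service_name>/<route>.
            return {"via": controller, "service": service_name, "payload": payload}
    raise RuntimeError("no controller available")
```

If the first controller is down, the caller simply falls through to the next one, which mirrors the load-balancing behavior of the external path.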

This behavior explains why the white arrows inside the cloud between services and controllers are bidirectional, whereas the arrow from Nginx toward the controller points in one direction only.