
[Diagram: SOAJS Cloud]

Here is a visualization of the SOAJS cloud once deployed. 

The entry point to the cloud is Nginx. Remember that Nginx is the web server we configured to forward all incoming traffic to the controller. 

Moreover, we have multiple instances of Nginx as well as multiple instances of the controller.

Internal Communications


An environment is represented by a cloud of microservices, and multiple environments (dev/stg/prod) make up a product.

To visualize this behavior better, the image below shows how one environment cloud behaves internally.

The cloud's entry point is the Nginx layer, which can consist of N instances. When a user makes a request to the domain, Nginx is the first to pick it up.

Nginx then proxies requests arriving on port 80 or 443 upstream to the Gateway layer, which listens on port 4000.
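The proxying step above can be sketched as an Nginx configuration fragment. This is a minimal illustration only: the upstream host names, instance count, and server_name are assumptions, not SOAJS defaults.

```nginx
# Hypothetical sketch of the Nginx layer forwarding traffic to the
# Gateway/controller layer on port 4000. Host names are illustrative.
upstream soajs_controllers {
    server controller-1:4000;
    server controller-2:4000;
}

server {
    listen 80;
    # (a matching "listen 443 ssl;" server would also need certificates)
    server_name api.example.com;

    location / {
        proxy_pass http://soajs_controllers;
        proxy_set_header Host $host;
    }
}
```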

The Gateway layer can also contain N instances. Once one of them receives the request from Nginx, it analyzes it and forwards it to the corresponding microservice so that the latter can handle it.

The microservices can also run N instances each for redundancy, and the overall result is a highly available cloud.

This redundancy also acts as load balancing, keeping the system responsive under load and mitigating sluggishness.

When the request reaches the controller, the controller decapsulates the session and forwards the request to the correct service, but only after some preliminary checks.

These preliminary checks serve two purposes:

  • a security feature: they stop "bad" requests before they reach the APIs.
  • a performance feature: they prevent "bad" requests from wasting the services' resources.
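A minimal sketch of what such gateway-side checks might look like. The check names, request fields, and registry shape are illustrative assumptions, not the actual SOAJS middleware.

```python
# Illustrative gateway-side preliminary checks. Field and registry names
# are hypothetical; the real SOAJS controller middleware differs.

def preliminary_checks(request, registry):
    """Reject a bad request before any microservice spends work on it."""
    # Security: the caller must present a recognized tenant key.
    key = request.get("key")
    if key not in registry["tenant_keys"]:
        return False, "unauthorized: unknown tenant key"

    # Security: the requested service route must actually exist.
    service, route = request.get("service"), request.get("route")
    routes = registry["services"].get(service, {}).get("routes", [])
    if route not in routes:
        return False, "not found: unknown route"

    # Performance: do not queue work for a service with no live instances.
    if not registry["services"][service].get("instances"):
        return False, "unavailable: no live instances"

    return True, "ok"

registry = {
    "tenant_keys": {"abc123"},
    "services": {
        "orders": {"routes": ["/list"], "instances": ["orders-1"]},
    },
}

ok, msg = preliminary_checks(
    {"key": "abc123", "service": "orders", "route": "/list"}, registry
)
rejected, reason = preliminary_checks(
    {"key": "nope", "service": "orders", "route": "/list"}, registry
)
print(ok, msg)            # True ok
print(rejected, reason)   # False unauthorized: unknown tenant key
```

Only requests that pass every check are handed on to a service instance; everything else is answered by the gateway itself.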

After the controller finishes its checks, it delivers the request via round-robin distribution to the next available instance of the required service.
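The round-robin step can be sketched in a few lines, assuming the gateway keeps a list of live instances per service (the instance names and ports below are hypothetical):

```python
# Minimal round-robin dispatcher over a service's live instances.
from itertools import cycle

class RoundRobin:
    def __init__(self, instances):
        self._cycle = cycle(instances)

    def next_instance(self):
        # Each call hands back the next instance, wrapping around.
        return next(self._cycle)

rr = RoundRobin(["orders-1:4101", "orders-2:4101", "orders-3:4101"])
picks = [rr.next_instance() for _ in range(4)]
print(picks)
# ['orders-1:4101', 'orders-2:4101', 'orders-3:4101', 'orders-1:4101']
```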

Also note that the services themselves do not interact directly with each other; they always pass through the controller.

In addition to how requests arriving from outside the cloud are handled, the cloud also provides a network flow for its microservices to communicate with one another.

Since the Gateway is the only layer that knows whether services are alive, every service that needs to communicate with another service should do so via the cloud's Gateway(s). The exception is when interConnect is turned on: the Gateway then augments the request with the information the services need, empowering microservices to communicate directly while still maintaining multitenancy.
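The two service-to-service flows above can be sketched as a small resolution function. The field names (the `interConnect` header and the injected address map) are illustrative assumptions, not the actual SOAJS wire format.

```python
# Sketch of how a calling service might pick its target address under
# the two flows described above. Names are hypothetical.

def resolve_target(gateway_registry, caller_headers, target_service):
    """Return the address a caller should use to reach target_service."""
    if caller_headers.get("interConnect"):
        # interConnect on: the gateway has already injected the live
        # instance address, so the caller talks to the service directly.
        return caller_headers["interConnect"][target_service]
    # interConnect off: all traffic goes through the gateway, which
    # proxies to a live instance and preserves the multitenancy context.
    return gateway_registry["gateway_address"]

registry = {"gateway_address": "gateway:4000"}

direct = resolve_target(
    registry, {"interConnect": {"orders": "orders-2:4101"}}, "orders"
)
proxied = resolve_target(registry, {}, "orders")
print(direct, proxied)  # orders-2:4101 gateway:4000
```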
