Tech Insights
FogAtlas architecture & models
The following figure shows the main FogAtlas components and their interactions.
FogAtlas high level architecture
The developer models an application as a graph of microservices, imposing requirements both on the individual microservices (e.g. CPU and/or memory requests) and on the data flows among them (e.g. throughput and/or latency).
The FogAtlas Controller retrieves the application graph and the infrastructure status (both implemented as K8s Custom Resource Definitions) and sends them to a Placement Algorithm.
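As a rough illustration of what the application custom resource carries, the Go sketch below mirrors the graph just described. All type and field names here are assumptions made for the example, not the actual FogAtlas CRD schema.

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// FogApplication is a hypothetical custom resource carrying the application
// graph; the real FogAtlas CRDs define their own schema, so every name below
// is an assumption made for illustration only.
type FogApplication struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              FogApplicationSpec `json:"spec"`
}

// FogApplicationSpec is the graph submitted by the developer: the microservice
// vertices plus the data flows connecting them.
type FogApplicationSpec struct {
	Microservices []MicroserviceSpec `json:"microservices"`
	DataFlows     []DataFlowSpec     `json:"dataFlows"`
}

// MicroserviceSpec carries the per-microservice requirements.
type MicroserviceSpec struct {
	Name          string `json:"name"`
	Image         string `json:"image"`
	CPURequest    string `json:"cpuRequest"`    // e.g. "500m"
	MemoryRequest string `json:"memoryRequest"` // e.g. "256Mi"
}

// DataFlowSpec carries the requirements imposed on a data flow.
type DataFlowSpec struct {
	From         string `json:"from"`
	To           string `json:"to"`
	MaxLatencyMs int    `json:"maxLatencyMs,omitempty"`
	Throughput   string `json:"throughput,omitempty"` // e.g. "10Mbps"
}

func main() {} // type-only sketch; no runtime logic
```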
The Placement Algorithm computes the placement of the microservices according to its objective function and the imposed constraints. The resulting per-node, per-microservice scores are sent to the Scheduler Plugin.
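The concrete objective function belongs to the Placement Algorithm itself, so the following is only a minimal sketch of how per-node scores for a single microservice could be produced: nodes that cannot satisfy the resource requests are filtered out (hard constraints), and the remaining candidates are ranked with an illustrative latency-based objective.

```go
package main

import "fmt"

// Node is a simplified view of a compute node as seen by a placement algorithm.
type Node struct {
	Name        string
	FreeCPU     float64 // cores
	FreeMemMiB  float64
	LatencyToMs float64 // latency towards the data source used by the microservice
}

// scoreNodes returns a per-node score for one microservice: nodes that cannot
// satisfy the resource requests are skipped, feasible nodes get a higher score
// the lower their latency is (an illustrative objective, not FogAtlas's own).
func scoreNodes(nodes []Node, cpuReq, memReqMiB float64) map[string]int64 {
	scores := make(map[string]int64)
	for _, n := range nodes {
		if n.FreeCPU < cpuReq || n.FreeMemMiB < memReqMiB {
			continue // constraint violated: node is not a candidate
		}
		scores[n.Name] = int64(1000 / (1 + n.LatencyToMs))
	}
	return scores
}

func main() {
	nodes := []Node{
		{Name: "fog-node-1", FreeCPU: 2, FreeMemMiB: 2048, LatencyToMs: 5},
		{Name: "cloud-node-1", FreeCPU: 16, FreeMemMiB: 32768, LatencyToMs: 40},
		{Name: "fog-node-2", FreeCPU: 0.2, FreeMemMiB: 128, LatencyToMs: 3},
	}
	fmt.Println(scoreNodes(nodes, 0.5, 256)) // fog-node-2 is filtered out
}
```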
The Scheduler Plugin acts at the "Score" and "Normalize Score" extension points of the K8s Scheduling Cycle and influences the vanilla K8s placement by adding the scores obtained from the Placement Algorithm. Microservices are then deployed on the K8s cluster according to the assigned scores.
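To show how such a plugin hooks into the scheduler framework, the sketch below implements the ScorePlugin interface (Score plus NormalizeScore) and simply replays externally provided scores, rescaling them to the 0–100 range the scheduler expects. This is not the FogAtlas plugin, only a shape sketch: the exact interface signatures vary across Kubernetes releases, and how the external scores reach the plugin is left out.

```go
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	framework "k8s.io/kubernetes/pkg/scheduler/framework"
)

// externalScores stands in for the per-node scores received from the Placement
// Algorithm; how FogAtlas actually feeds them to the plugin is not shown here.
type externalScores map[string]int64

// PlacementScorer is a sketch of a Score plugin that replays externally
// computed scores at the "Score" / "Normalize Score" extension points.
type PlacementScorer struct {
	scores externalScores
}

var _ framework.ScorePlugin = &PlacementScorer{}

func (p *PlacementScorer) Name() string { return "PlacementScorer" }

// Score returns the raw score assigned to this node for the pod being
// scheduled (0 if the node was not scored at all).
func (p *PlacementScorer) Score(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) (int64, *framework.Status) {
	return p.scores[nodeName], nil // a nil status means success
}

func (p *PlacementScorer) ScoreExtensions() framework.ScoreExtensions { return p }

// NormalizeScore rescales the raw scores into the [0, MaxNodeScore] range
// before the scheduler combines them with the scores of the other plugins.
func (p *PlacementScorer) NormalizeScore(ctx context.Context, state *framework.CycleState, pod *v1.Pod, scores framework.NodeScoreList) *framework.Status {
	var max int64
	for _, s := range scores {
		if s.Score > max {
			max = s.Score
		}
	}
	if max == 0 {
		return nil
	}
	for i := range scores {
		scores[i].Score = scores[i].Score * framework.MaxNodeScore / max
	}
	return nil
}

func main() {} // library-style sketch; plugin registration is omitted
```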
Through a monitoring chain based on Prometheus, metrics are collected and then evaluated by the FADSReq Controller against the imposed thresholds.
Once a threshold violation is detected, corrective actions (e.g. scaling) are executed.
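A minimal sketch of such a threshold check, assuming the Prometheus Go API client and an illustrative latency metric and query (the actual metrics, rules and corrective actions of the FADSReq Controller are not shown here):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
	"github.com/prometheus/common/model"
)

// checkLatencyThreshold queries Prometheus and reports whether the configured
// threshold is violated for any of the returned series.
func checkLatencyThreshold(ctx context.Context, promAddr, query string, thresholdMs float64) (bool, error) {
	client, err := api.NewClient(api.Config{Address: promAddr})
	if err != nil {
		return false, err
	}
	v1api := promv1.NewAPI(client)

	result, warnings, err := v1api.Query(ctx, query, time.Now())
	if err != nil {
		return false, err
	}
	if len(warnings) > 0 {
		log.Printf("prometheus warnings: %v", warnings)
	}

	// A vector result holds one sample per matching series; any sample above
	// the threshold counts as a violation.
	vector, ok := result.(model.Vector)
	if !ok {
		return false, fmt.Errorf("unexpected result type %s", result.Type())
	}
	for _, sample := range vector {
		if float64(sample.Value) > thresholdMs {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	violated, err := checkLatencyThreshold(context.Background(),
		"http://prometheus.example:9090",
		`histogram_quantile(0.95, sum(rate(request_latency_ms_bucket[5m])) by (le))`, // hypothetical metric
		100, // threshold in milliseconds
	)
	if err != nil {
		log.Fatal(err)
	}
	if violated {
		fmt.Println("threshold violated: trigger corrective action (e.g. scale out)")
	}
}
```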
The application is re-submitted to FogAtlas.
FogAtlas as a whole is based on two models: one describing the distributed infrastructure and one modelling an application as a graph of microservices. The following figures present both of them.
FogAtlas infrastructure and application models (high level)
Each Region (either Cloud or Fog) hosts Compute Nodes and offers External Endpoints (i.e. data sources like sensors or services offered by external providers). Compute Nodes host Microservices that in turn compose the Applications.
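In Go terms, the infrastructure model could be sketched as follows; the type and field names are illustrative, not the actual FogAtlas infrastructure CRD.

```go
package main

import "fmt"

// RegionType distinguishes cloud regions from fog regions.
type RegionType string

const (
	CloudRegion RegionType = "cloud"
	FogRegion   RegionType = "fog"
)

// ExternalEndpoint is a data source (e.g. a sensor) or a service offered by an
// external provider, reachable from a region.
type ExternalEndpoint struct {
	Name string
	URL  string
}

// ComputeNode is a node that can host microservices.
type ComputeNode struct {
	Name      string
	CPUCores  float64
	MemoryMiB float64
}

// Region groups the compute nodes it hosts and the external endpoints it offers.
type Region struct {
	Name      string
	Type      RegionType
	Nodes     []ComputeNode
	Endpoints []ExternalEndpoint
}

func main() {
	edge := Region{
		Name:      "factory-floor",
		Type:      FogRegion,
		Nodes:     []ComputeNode{{Name: "fog-node-1", CPUCores: 4, MemoryMiB: 8192}},
		Endpoints: []ExternalEndpoint{{Name: "camera-endpoint", URL: "rtsp://10.0.0.5/stream"}},
	}
	fmt.Printf("%s region %q: %d nodes, %d endpoints\n", edge.Type, edge.Name, len(edge.Nodes), len(edge.Endpoints))
}
```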
FogAtlas application model (detailed)
An Application is defined as a graph. The vertices of this graph can be External Endpoints or Microservices, whereas the edges are DataFlows.
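A minimal sketch of this graph structure, with a check that every DataFlow connects declared vertices (names and fields are again illustrative):

```go
package main

import "fmt"

// VertexKind distinguishes the two vertex types of the application graph.
type VertexKind string

const (
	ExternalEndpointVertex VertexKind = "ExternalEndpoint"
	MicroserviceVertex     VertexKind = "Microservice"
)

// Vertex is either an external endpoint or a microservice.
type Vertex struct {
	Name string
	Kind VertexKind
}

// DataFlow is a directed edge of the graph, carrying the requirements imposed on it.
type DataFlow struct {
	From, To     string
	MaxLatencyMs int
}

// Application is the graph itself: vertices plus data-flow edges.
type Application struct {
	Vertices []Vertex
	Edges    []DataFlow
}

// validate checks that every data flow connects two declared vertices.
func (a Application) validate() error {
	known := make(map[string]bool, len(a.Vertices))
	for _, v := range a.Vertices {
		known[v.Name] = true
	}
	for _, e := range a.Edges {
		if !known[e.From] || !known[e.To] {
			return fmt.Errorf("data flow %s -> %s references an undeclared vertex", e.From, e.To)
		}
	}
	return nil
}

func main() {
	app := Application{
		Vertices: []Vertex{
			{Name: "camera-endpoint", Kind: ExternalEndpointVertex},
			{Name: "preprocessor", Kind: MicroserviceVertex},
		},
		Edges: []DataFlow{{From: "camera-endpoint", To: "preprocessor", MaxLatencyMs: 20}},
	}
	fmt.Println("valid application graph:", app.validate() == nil)
}
```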