FogAtlas (an evolution of the former Foggy platform) is a software framework for managing a geographically distributed and decentralized Cloud Computing infrastructure that provides computational, storage and network services close to the data sources and the users, embracing the Fog Computing paradigm. FogAtlas is able to manage the so-called Cloud-to-Thing Continuum, offering service-aware workload placement and zero-touch deployment. It is an evolution of the well-known IaaS and PaaS paradigms, adding the concept of “locality” to the traditional Cloud Computing model and easing the operations of a Fog Computing infrastructure.
FogAtlas is built on top of Kubernetes and other Open Source technologies, namely Ansible, Prometheus and Grafana.
Cloud Computing becomes distributed and decentralized
Nowadays Cloud Computing is a well-consolidated technology offering a variety of services and functionalities to customers in different verticals who want on-demand computational resources through a convenient pay-as-you-go model. Cloud Computing (i.e. the Public Cloud) is by its nature “centralized”: huge amounts of resources are concentrated in a few big data centers so as to increase operational efficiency and lower the complexity of the architectural solutions.
However, such centralization has its limitations: in recent years, thanks to the Internet of Things (IoT), huge amounts of data have been produced in remote locations, and processing them requires moving data from those locations to the central Cloud. Such a transfer can create issues in terms of inefficiency (massive utilization of network bandwidth, delays due to network latency), availability and robustness (network partitioning), data privacy and security, and more generally a suboptimal usage of resources. A similar problem has already been seen in the context of commuting: we are all familiar with the traffic jams generated by commuting from the suburbs to downtown. Just as the commuting problem can be tackled by introducing modern ways of working, namely “smart working”, “agile working” and “work from home”, the data transfer problem can be tackled by distributing computational resources close to the sources of data, allowing at least a pre-processing and a first aggregation where data is produced. Therefore Cloud Computing must escape from the traditional centralized data centers and offer its services closer to the users or to the data sources, embracing a more distributed and decentralized paradigm.
The advent of Fog Computing
Different names (with slightly different meanings) have been proposed for such a new distributed Cloud Computing paradigm, namely Fog Computing and Edge Computing.
Fog Computing is a relatively new approach that aims to extend the concepts of Cloud Computing (not to replace it) in order to offer cloud services and functionalities throughout the whole Cloud-to-Thing Continuum, i.e. the different tiers offering computational, storage and network resources between the “real” world of the “things” and the traditional Cloud Computing ecosystem.
Essentially the advantages of Fog Computing can be summarized as follows:
Real-time responses and low latency: network delays caused by communication with the central Cloud are avoided;
Bandwidth usage: pre-processing data at the edge saves and optimizes bandwidth usage;
Fault tolerance: a decentralized architecture is more resilient in case of network faults;
Data privacy & security: users can decide which data to keep on their premises and which data to send to the Cloud.
The scenarios and verticals where Fog Computing can be applied are disparate:
Autonomous Driving
Industry 4.0
Smart Cities
Healthcare
Utilities, Energy
Agriculture
Tactile internet, robotics
and many more.
Looming new challenges and issues
The Fog/Edge paradigm opens new challenges and increases the complexity of managing and operating a computing infrastructure with respect to a traditional centralized Cloud Computing environment: distribution and decentralization must be addressed, and application deployment, resource allocation and workload placement must be rethought for a heterogeneous, widespread environment where locality, context-awareness and network performance must be taken into account.
Current Cloud technologies and platforms have been developed mainly for managing big, centralized data centers in an efficient and effective way, but they lack the ability to handle distributed and decentralized infrastructures.
Here comes FogAtlas. It addresses these new needs by providing a platform that offers IaaS and PaaS functionalities for highly distributed, heterogeneous and decentralized infrastructures.
What is FogAtlas
FogAtlas is an architectural framework and a software platform for orchestrating cloud-native applications and easing operations in a multi-tier, highly distributed, heterogeneous and decentralized Cloud Computing environment like the one foreseen by the Fog Computing paradigm. The main features offered are:
Set-up, monitoring, operations and fleet management of a multi-tier, distributed Cloud infrastructure;
Zero-touch deployment and orchestration of containerized applications, resource allocation and workload placement.
Therefore, the problems that FogAtlas aims to solve are mainly related to fleet management and workload orchestration in an environment where different tenants and/or applications compete for the same resources. Of course such a context adds many “degrees of freedom” with respect to similar problems in a centralized and homogeneous cluster of resources: resources and services belonging to distributed clusters are disparate, provide different capabilities and are spread across different locations, connected to data centers (e.g. the Public Cloud) by a network that is not always reliable and/or able to guarantee the requested performance. The process of scheduling the workload takes different parameters into account and embraces novel policies with respect to the ones currently used: for instance the location, the network characteristics, the computational profile and the kind/model of a given physical resource are considered in order to efficiently allocate resources and schedule the workload.
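The kind of locality- and network-aware scheduling described above can be sketched as a simple scoring function over candidate nodes. Everything below (the node attributes, the weighting and the `score` function itself) is a hypothetical illustration of the idea, not the actual FogAtlas scheduler:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    region: str          # locality of the node (e.g. an edge gateway vs. a data center)
    free_millicpu: int   # spare CPU capacity on the node
    latency_ms: float    # measured network latency towards the data source

def score(node: Node, required_millicpu: int,
          preferred_region: str, max_latency_ms: float) -> float:
    """Return a placement score for a candidate node, or -1 if unfeasible.

    Higher is better: nodes in the preferred locality and with lower
    latency win, mirroring location- and network-aware placement policies.
    """
    if node.free_millicpu < required_millicpu or node.latency_ms > max_latency_ms:
        return -1.0  # the node cannot satisfy the request at all
    locality_bonus = 1.0 if node.region == preferred_region else 0.0
    latency_score = 1.0 - node.latency_ms / max_latency_ms
    return locality_bonus + latency_score

# A small edge node competes with a large central one for a workload
# that prefers the edge locality and tolerates at most 100 ms of latency.
nodes = [
    Node("edge-1", "edge-gw-7", free_millicpu=500, latency_ms=5.0),
    Node("cloud-1", "central-dc", free_millicpu=8000, latency_ms=80.0),
]
best = max(nodes, key=lambda n: score(n, 250, "edge-gw-7", 100.0))
print(best.name)  # the low-latency edge node wins despite its smaller capacity
```

A real scheduler would of course combine many more signals (bandwidth, device model, tenant quotas), but the shape of the decision is the same: filter out unfeasible nodes, then rank the feasible ones by locality and network performance.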
FogAtlas is based on Open Source technologies, namely Kubernetes and Ansible, properly extended and integrated with software developed by the FBK RiSING team so as to handle, with a unified approach, not only Cloud Computing infrastructures but also Fog Computing ones.
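To give a concrete flavor of how locality can be expressed on the underlying Kubernetes layer, a containerized workload might be pinned to a given tier of the continuum with standard node affinity. The manifest below is a generic illustration, not FogAtlas-specific: the `tier` label, its `edge` value and the container image are all hypothetical placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-preprocessor        # hypothetical pre-processing service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-preprocessor
  template:
    metadata:
      labels:
        app: edge-preprocessor
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: tier        # hypothetical locality label on the nodes
                operator: In
                values: ["edge"] # schedule only on edge-tier nodes
      containers:
      - name: preprocessor
        image: example.org/preprocessor:latest  # placeholder image
        resources:
          requests:
            cpu: 250m            # modest request, suitable for small edge boards
            memory: 128Mi
```

Plain node affinity only captures the "where"; the service-aware placement and zero-touch deployment layered on top are where a Fog-oriented platform adds value beyond vanilla Kubernetes.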
Why FogAtlas
As explained, FogAtlas aims at easing the management and operations of a Fog Computing infrastructure by extending current technologies that are better suited to the Cloud Computing context.
The potential stakeholders of FogAtlas are the following:
Infrastructure owners (e.g. cloud providers, sensor network owners) who want to manage a Fog Computing infrastructure efficiently and effectively;
Developers of innovative, cloud-native and smart applications who want to exploit the advantages of an infrastructure that offers services distributed across the territory and close to the data sources.
As a real-world example, let’s imagine a Smart City municipality owning a distributed infrastructure built around a set of cameras for monitoring the urban territory. To exploit this infrastructure well, it should be flexible enough to host diverse applications offering different functionalities, ranging from security control to the tracking of cars and pedestrians, to the monitoring of public events. With current products on the market, the municipality needs to buy a different solution for each type of functionality and vertical. With the approach proposed by FogAtlas, the hardware (even low-cost single-board computers, e.g. Raspberry Pi, Nettop) can be virtualized and host different applications and services on demand. Moreover, the ability to intelligently handle resource allocation and workload placement throughout the infrastructure allows different parameters to be optimized based on the needs of the owner: maximizing the number of hosted applications versus maximizing SLA compliance. In this way a win-win situation is put in place, involving both infrastructure owners who want to exploit their infrastructure efficiently and developers who offer innovative and smart applications to the final users.