While the terminology is new, the basic premise of fog computing is classic decentralization: some processing and storage functions are better performed locally instead of sending data all the way from the sensor to the cloud and back again to an actuator. This reduces both latency and the volume of data that must travel back and forth. Lower latency improves the user experience in consumer applications, while in industrial applications it can improve response times for critical system functions, saving money, or even lives.
This distributed approach also improves security, since less data transmitted from the edge to the cloud means less data exposed in transit; it likewise reduces power consumption and network loading, enhancing overall quality of service (QoS). Fog computing further strives to pool local resources to make the most of what's available at a given location, and adds data analytics, one of the fundamental elements of the IoT, to the mix.
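The local-processing idea described above can be sketched in a few lines of Python. This is a hypothetical illustration, not part of any fog standard: the batch size, alert threshold, and summary fields are all assumptions chosen to show how a fog node might reduce a stream of raw samples to one compact record plus locally actionable alerts.

```python
# Hypothetical sketch of fog-style local processing: instead of streaming
# every raw sample to the cloud, a fog node aggregates readings locally and
# forwards only a compact summary plus out-of-range alerts. The threshold
# and field names are illustrative assumptions.
from statistics import mean

ALERT_THRESHOLD = 80.0  # e.g. degrees C; assumed critical limit


def process_batch(samples):
    """Reduce a batch of raw sensor samples to one summary record.

    Returns (summary, alerts): the summary goes to the cloud on a slow
    schedule; alerts can be acted on locally for low-latency response.
    """
    alerts = [s for s in samples if s > ALERT_THRESHOLD]
    summary = {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "avg": round(mean(samples), 2),
    }
    return summary, alerts


# 1,000 simulated raw readings shrink to a single summary record.
raw = [25.0 + (i % 60) for i in range(1000)]
summary, alerts = process_batch(raw)
print(summary)                 # one record instead of 1,000 uploads
print(len(alerts), "readings exceeded the local alert threshold")
```

The point of the sketch is the ratio: a thousand samples become one upload, while threshold violations are caught at the node rather than after a round trip to the cloud.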
The nuances of fog computing, in terms of the network architecture and protocols required to fully exploit its potential, are such that groups like the OpenFog Consortium have formed to define how it should best be done (Figure 1). Figure 1: The OpenFog Consortium is looking to determine the best architectural and programming approaches to ensure optimum distribution of functionality and intelligence from sensors to the cloud, and back. (Source: OpenFog Consortium)
Members of the consortium to date include Cisco, Intel, ARM, Dell, Microsoft, Toshiba, RTI, and Princeton University, and the group is eager to harmonize with other organizations, including the Industrial Internet Consortium (IIC), ETSI-MEC (Mobile Edge Computing), the Open Connectivity Foundation (OCF), and OpenNFV. The consortium has already published a white paper that walks through its current thinking (registration is required to download it).
Reliable sensors for fog computing
As fog computing rolls in, the onus is upon designers