Fog Computing: Definition, Advantages, Disadvantages, Use Cases
The Internet of Things (IoT) has altered the IT landscape globally. So-called smart manufacturing, even more than the interconnection of everyday objects, presents new difficulties for established cloud architectures. In the context of Industry 4.0, IoT is becoming a crucial technology for industrial facilities. The Smart Manufacturing Leadership Coalition (SMLC) leads the public-private "smart manufacturing" initiative, whose goal is for industrial plants and logistics networks to organize work operations autonomously while increasing energy and production efficiency.
Unfortunately, many regions are still not ready for Industry 4.0, and remote industrial facilities frequently lack the ultra-fast internet connections required for interconnectivity. The nonprofit organization Connected Nation questions the feasibility of smart manufacturing, detailing the difficulties of existing plans for rural broadband expansion. After all, a fully networked industrial plant produces several hundred terabytes of data each day, enormous volumes that cannot be handled centrally with well-established technologies or transferred wirelessly to and from the cloud.
Fog computing is being explored as a potential answer to these IoT implementation challenges. In this article, we will discuss the following topics related to fog computing:
- What is fog computing?
- What are the key components of a fog computing architecture?
- How does fog computing work?
- What are the advantages of using fog computing over cloud computing?
- What are the challenges of fog computing?
- What are some of the applications and use cases for fog computing?
- What are the differences between fog computing and cloud computing?
- What is the difference between fog computing and mist computing?
- What is the difference between edge computing and fog computing?
- What are the best practices for fog computing?
- What is the history of fog computing?
What is Fog Computing?
Fog computing is a decentralized computing environment in which data, processing, storage, and applications are distributed between the data source and the cloud. Like edge computing, fog computing brings the benefits and power of the cloud closer to where data is produced and used. Since both involve moving processing and intelligence closer to where the data is created, the terms fog computing and edge computing are sometimes used interchangeably. This is frequently done to increase efficiency, though it may also be done for security and regulatory reasons.
The fog metaphor comes from the meteorological term for a cloud close to the ground, just as fog computing concentrates at the edge of the network. The name is frequently linked to Cisco; Ginny Nichols, one of the company's product line managers, is said to have coined it. Cisco Fog Computing is a registered name, but fog computing itself is open for anyone to use.
Figure 1. What is Fog Computing?
What are the Key Components of a Fog Computing Architecture?
A fog computing system is implemented in a variety of ways. The following is a description of the elements shared by various fog computing architectures:
- Physical and virtual nodes (end devices): Whether they are application servers, edge routers, end devices such as mobile phones and smartwatches, or sensors, end devices serve as the points of contact with the physical world. These devices span a wide range of technologies and are the data producers, which means they may run different underlying software and hardware and have varying storage and processing capacities.
- Fog nodes: Fog nodes are independent devices that collect the generated data. There are three types of fog nodes: fog devices, fog servers, and fog gateways. Fog devices store the required information, while fog servers also compute on it to determine the best course of action; fog servers are typically connected to fog devices. Fog gateways transmit data between the various fog devices and servers. This layer is significant because it controls how quickly information is processed and how it flows. Setting up fog nodes requires knowledge of various hardware configurations, the devices they directly control, and network connectivity.
- Data processors: Data processors are programs that run on fog nodes. They filter, reduce, and sometimes reconstruct faulty data arriving from end devices, and they decide whether to keep the data locally on a fog server or send it to the cloud for long-term storage. Data processors homogenize information from several sources so it can be transmitted and exchanged easily, exposing a standardized, programmable interface to the other system components. If one or more sensors fail, some processors can fill in the missing information using prior data, which helps prevent application failures. A minimal sketch of such a processor appears after this list.
- Monitoring services: Monitoring services typically include application programming interfaces (APIs) that track system performance and resource availability. Monitoring systems ensure that all endpoints and fog nodes are operational and that communication is not lagging. At times, hitting the cloud server may be more expensive than waiting for a node to become available; the monitor handles such situations. Based on utilization, monitors also audit the existing system and forecast future resource needs.
- Security tools: Because fog components communicate directly with raw data sources, security must be built in even at the ground level. Since wireless networks are often used for all communication, encryption is a necessity. In some circumstances, end users request data directly from the fog nodes, so user and access control also form part of fog computing's security measures.
- Resource manager: Fog computing is made up of independent nodes that must operate in synchronization. The resource manager allocates and reallocates resources to different nodes and arranges data transfer between nodes and the cloud. It also handles data backup, guaranteeing that no data is lost. Since fog components take over part of the cloud's SLA obligations, high availability is essential. The resource manager works with the monitor to identify when and where demand is high, ensuring there is no unnecessary duplication across fog servers or data.
- Applications: Applications provide the actual services to users. They make use of the data that the fog computing system supplies to deliver high-quality service while remaining cost-effective. It is important to note that these components need to be managed through an abstraction layer that exposes a common interface and a common set of communication protocols, typically via APIs and other web services.
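To make the role of a data processor more concrete, below is a minimal sketch in Python of how one might filter readings, aggregate them locally, and decide whether to keep the result on the fog server or forward it to the cloud. The class name, the validity range, the threshold, and the `send_to_cloud` stub are illustrative assumptions, not part of any specific fog platform.

```python
import json
import statistics
from typing import Dict, List

# Hypothetical sketch of a fog-node data processor: it filters raw sensor
# readings, aggregates them locally, and decides whether the summary should
# stay on the fog server or be forwarded to the cloud for long-term storage.

class FogDataProcessor:
    def __init__(self, valid_range=(0.0, 100.0), cloud_threshold=75.0):
        self.valid_range = valid_range          # readings outside this range are dropped
        self.cloud_threshold = cloud_threshold  # aggregates above this are escalated
        self.local_store: List[Dict] = []       # short-term storage on the fog server

    def filter_readings(self, readings: List[float]) -> List[float]:
        """Drop malformed or out-of-range values coming from end devices."""
        low, high = self.valid_range
        return [r for r in readings if low <= r <= high]

    def process(self, sensor_id: str, readings: List[float]) -> Dict:
        clean = self.filter_readings(readings)
        if not clean:
            return {"sensor": sensor_id, "status": "no valid data"}
        summary = {
            "sensor": sensor_id,
            "mean": statistics.mean(clean),
            "max": max(clean),
            "count": len(clean),
        }
        # Decision point: keep routine data locally, escalate notable readings.
        if summary["mean"] > self.cloud_threshold:
            self.send_to_cloud(summary)
        else:
            self.local_store.append(summary)
        return summary

    def send_to_cloud(self, summary: Dict) -> None:
        # Placeholder: a real fog node would POST this to a cloud endpoint.
        print("forwarding to cloud:", json.dumps(summary))


if __name__ == "__main__":
    processor = FogDataProcessor()
    # The out-of-range reading (250.0) is filtered out before aggregation.
    print(processor.process("temp-sensor-7", [21.5, 22.0, 250.0, 23.1]))
```

In a real deployment, the filtering rules and escalation policy would be tuned per sensor type, and the cloud hand-off would go through whatever API the chosen platform exposes.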
How Does Fog Computing Work?
Fog computing uses local devices (fog nodes or edge devices) that sit closer to the data sources and have greater storage and processing capacity than the end devices themselves. Compared with central processing, these nodes can process data significantly faster. Raw data is collected by IoT (Internet of Things) beacons and transferred to a nearby fog node, where it is locally filtered and processed before being sent to the cloud for long-term storage. Edge devices include:
- Switches
- Cameras
- Routers
- Controllers
- Embedded servers
However, any device with storage, processing power, and network access can operate as a fog node. In a large, distributed network, these nodes are positioned at strategic locations to provide local analysis and access to crucial information.
The process of fog computing is as follows (a minimal sketch of the full flow appears after these steps):
- Signals are sent from Internet of Things (IoT) devices to automation controllers, which run control system software to automate the equipment.
- Via protocol gateways, the control system program transfers the data.
- To ensure that the data can be easily understood by internet-based applications, it is transformed into protocols like HTTP.
- Fog nodes collect the data for more thorough analysis.
- Data is filtered and stored for future use.
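The sketch below illustrates the shape of this flow in Python: a device signal passes through a protocol gateway that wraps it in an HTTP/JSON-style payload, and a fog node then parses, filters, and routes it. The function names, the payload format, and the alert threshold are assumptions made for illustration; they are not drawn from any real fog product.

```python
import json
from datetime import datetime, timezone

# Hypothetical end-to-end sketch of the steps above:
# device signal -> control system -> protocol gateway (converts to an
# HTTP/JSON-style payload) -> fog node (filters and routes the data).

def device_signal(device_id: str, value: float) -> dict:
    """Step 1: an IoT device emits a raw signal to the automation controller."""
    return {"device": device_id, "raw_value": value}

def protocol_gateway(signal: dict) -> str:
    """Steps 2-3: the gateway wraps the signal in an internet-friendly
    (HTTP/JSON-style) representation so web applications can consume it."""
    body = {
        "device": signal["device"],
        "value": signal["raw_value"],
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(body)

def fog_node(payload: str, alert_threshold: float = 80.0) -> dict:
    """Steps 4-5: the fog node parses and filters the data, keeping routine
    readings locally and flagging the rest for the cloud."""
    record = json.loads(payload)
    record["destination"] = "cloud" if record["value"] > alert_threshold else "local"
    return record

if __name__ == "__main__":
    sig = device_signal("pump-12", 92.4)
    payload = protocol_gateway(sig)
    print(fog_node(payload))  # routed to "cloud" because 92.4 exceeds the threshold
```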
What are the Advantages of Using Fog Computing over Cloud Computing?
The following are some advantages of fog computing:
- Better security is provided. The same techniques used in an IT environment are used to safeguard fog nodes.
- Some data is processed locally instead of being transmitted to the cloud, which saves network bandwidth and lowers operational expenses.
- It reduces latency, enabling rapid decisions. This helps prevent accidents.
- User data is better protected because it is analyzed locally rather than sent to the cloud, and the devices involved are managed and controlled by the IT staff.
- With the right tools, fog applications are easy to develop and can drive equipment according to customers' needs.
- Fog nodes are inherently mobile; they can join and leave the network at any time.
- Fog nodes can withstand harsh environmental conditions in places like railways, vehicles, the ocean, and factories, and they can be deployed in remote areas.
- Fog computing reduces latency because data analysis happens locally, which means less bandwidth use and shorter round-trip times.
What are the Challenges of Fog Computing?
Fog computing has the following drawbacks:
- Physical location: Fog computing negates some of the "anytime/anywhere" advantages of cloud computing since it is anchored to a specific area.
- Possible security concerns: Fog computing is vulnerable to security problems like man-in-the-middle (MitM) attacks or spoofing of Internet Protocol (IP) addresses under the correct conditions.
- Startup expenses: Fog computing is a system that makes use of both edge and cloud resources, therefore there are hardware expenditures involved.
- Ambiguous concept: Although fog computing has been around for a while, there is still considerable confusion surrounding its definition since different manufacturers define it differently.
What are the Applications and Use Cases for Fog Computing?
A smart electrical grid is one use case for fog computing. Modern electrical grids are highly dynamic, increasing output in response to rising demand and reducing it when it is not needed in order to stay economical. A smart grid depends heavily on real-time data about electricity production and consumption to function successfully.
It is best to analyze this data close to the remote place where it is created, which makes fog computing a natural fit. In other cases, the data comes not from a single sensor but from a collection of sensors, such as the electricity meters in a neighborhood. Here it is preferable to aggregate and process the data locally rather than transmit all of the raw data, to avoid overburdening the transmission links.
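As a rough illustration of that local aggregation, the sketch below sums readings from a hypothetical neighborhood of smart meters at a fog node, so only a compact summary travels upstream. The meter names, the reading format, and the chosen summary fields are assumptions for illustration only.

```python
from collections import defaultdict

# Hypothetical sketch: a fog node near a neighborhood aggregates raw
# smart-meter readings locally, so only a compact summary (rather than
# every individual reading) has to travel to the utility's cloud.

def aggregate_meter_readings(readings):
    """readings: list of (meter_id, kwh) tuples collected over one interval."""
    per_meter = defaultdict(float)
    for meter_id, kwh in readings:
        per_meter[meter_id] += kwh
    return {
        "interval_total_kwh": round(sum(per_meter.values()), 3),
        "meters_reporting": len(per_meter),
        "peak_meter": max(per_meter, key=per_meter.get),
    }

if __name__ == "__main__":
    raw = [("meter-01", 0.42), ("meter-02", 0.55), ("meter-01", 0.38), ("meter-03", 0.61)]
    print(aggregate_meter_readings(raw))
    # e.g. {'interval_total_kwh': 1.96, 'meters_reporting': 3, 'peak_meter': 'meter-01'}
```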
Fog computing has applications in the Internet of Things (IoT), including the next-generation smarter transportation network (V2V in the US and the Car-To-Car Consortium in Europe). The "Internet of Vehicles" promises safer transportation through improved collision avoidance with traffic that moves more smoothly. Each vehicle and traffic enforcement device is an IoT device that produces a stream of data and connects to other vehicles as well as traffic signals and the streets themselves.
Each car produces a substantial amount of data just from its speed and direction, as well as how hard and when it brakes, and this must be shared with other vehicles. Because the data originates from moving cars, it must be delivered wirelessly, on the 5.9 GHz band in the USA; if this is not handled properly, the volume of data could overwhelm the limited mobile capacity. Processing data at the vehicle level with a fog computing strategy, via an onboard processing unit, is a crucial part of sharing the constrained mobile bandwidth.
Fog computing has also been used in manufacturing with the Industrial Internet of Things (IIoT). Instead of sending all of their data to the cloud, connected industrial machines with sensors and cameras collect and analyze data locally. In one distributed fog computing deployment, processing the data locally resulted in a 98% reduction in the number of data packets transported while retaining 97% data accuracy. The resulting energy savings are important for efficient energy use, a critical concern for battery-operated devices.
Although fog computing is a relatively recent addition to the cloud computing paradigm, it has gained substantial traction and is well-positioned for expansion. Events such as the Fog World Congress highlight this developing technology.
What are the Differences Between Fog Computing and Cloud Computing?
The ideas of clouds and fog are extremely close to one another. Yet, there are several criteria where cloud computing and fog computing differ from one another. A side-by-side comparison of cloud computing and fog computing is shown below:
- In cloud architecture, large data centers are centrally located around the world, often thousands of kilometers from client devices. Fog architecture is distributed and consists of millions of small nodes placed as close as possible to client devices.
- Fog is closer to end users because it serves as an intermediary between data centers and hardware. Without a fog layer, direct communication between the cloud and the devices takes longer.
- With cloud computing, distant data centers handle the data processing. For real-time control, fog processing and storage are carried out at the network's edge, near the information source.
- In terms of computational power and storage capacity, the cloud outperforms fog.
- There are a few sizable server nodes in the cloud. Millions of tiny nodes make up fog.
- Because it can react immediately, fog performs short-term edge analysis, whereas the cloud, with its higher latency, focuses on long-term, deeper analysis.
- Clouds have high latency, while fog has low latency.
- In the absence of an Internet connection, a cloud system fails. Fog computing employs a variety of protocols and standards, reducing the likelihood of failure.
- Fog has a distributed architecture, which makes it a more secure system than the cloud.
What is the Difference Between Mist and Fog Computing?
Mist computing can bridge the gap between the central cloud and edge computing. A mist device builds on edge and fog devices, with some features resembling those of a full cloud server. Mist and fog computing complement one another, covering both offline and online usage. Mist computing works at the extreme edge of the network, in a layer made up of microcontrollers and sensors, and it gathers resources through cloud networks and the communication tools available on the sensors themselves.
Fog technology is used both in cloud computing and in network edge computing, whereas mist computing sits between cloud and edge/fog computing. In fog computing, any device with processing, storage, and network connectivity can act as a fog node and be placed on a railroad track or at a gas station. Mist computing, by contrast, is lightweight computing that runs in the network itself, using only microcontrollers and microchips. Fog brings intelligence down to the lower levels of the cloud architecture, whereas in mist computing intelligence is optional.
What is the Difference Between Edge and Fog Computing?
The location of the intelligence and computing capacity is the primary distinction between fog and edge computing, according to the OpenFog Consortium, which Cisco founded. In a purely fog environment, intelligence sits at the local area network (LAN) level: data is sent from endpoints to a fog gateway, which forwards it to the appropriate resources for processing, and the results are returned through the fog gateway.
In edge computing, intelligence and processing power sit in the endpoint or a gateway. Proponents of edge computing praise how it reduces points of failure, because each device operates autonomously and decides which data to store locally and which to send to a gateway or the cloud for further analysis. Proponents of fog computing, on the other hand, argue that it is more scalable and provides a better overall view of the network, since it receives data from several data points.
However, it should be emphasized that some network experts believe fog computing to be nothing more than the Cisco brand name for one type of edge computing.
What are the Best Practices for Fog Computing?
The implementation of a fog engine has its challenges. Because administration is simpler, businesses frequently choose a centralized strategy for their technological infrastructure, and installing a dispersed collection of heterogeneous fog devices introduces additional compatibility and maintenance issues. Here are the top 10 best practices for fog computing:
- Be sure to leave room for flexibility: The appeal of fog computing lies in its ability to integrate various pieces of hardware and software. Things get complicated very quickly if flexible interface software isn't provided for this integration. New physical and virtual sensors must be considered while developing web-based services and APIs, and the fog engine must interface easily not just with other fog nodes but also with the existing cloud solution.
- Install a fog console: Administrators must monitor all installed fog nodes and decommission them as needed. Managing this decentralized architecture from a central location removes zombie fog device vulnerabilities and maintains order. Because fog components are subject to the same regulations as cloud-based services, compliance audits are handled more easily with a powerful reporting and logging engine and a management panel.
- Implement access restriction at the fog node layer: In a standard cloud-based configuration, users access cloud services directly, and every cloud vendor has a proprietary access management system that integrates with IAM products from outside suppliers. In fog computing, the fog layers serve as a go-between connecting the user and the cloud. As a result, the same authorization procedures and rules apply here, and the fog engine must know the identity of the person making the service request.
- Put the necessary security procedures and tools in place: Security is one of the main problems with fog computing because it is more complicated in a decentralized, local environment. User authentication is only the first stage of fog security. Since the transfer mechanism is predominantly wireless, every data transmission must be encrypted. Application service requests require validation of application signatures. Sensitive user data is subject to compliance laws even when it is only temporarily retained. User behavior profiling is another element that offers an additional degree of protection.
- Keep the hardware and software minimal: It is crucial to choose the appropriate hardware and software for each sensor. At the fog level it may be tempting to over-engineer and add complex devices, but the goal is a small hardware and software footprint. Anything more leads to pricey middle-level computation that can compromise security. Carefully analyze the function of each sensor and its accompanying fog node; the lifecycle of each fog component can then be automated and managed from the main panel.
- Include threat identification and mitigation at the fog level: Detecting threats at the fog level, before they ever reach the primary cloud infrastructure, is one of the best security practices available. The security part of the fog engine must be tuned to detect irregularities in user and application behavior. With so many different components involved, it is easy to overlook vulnerabilities tied to particular hardware or software, so security updates and patches must be applied according to a defined procedure and timeline.
- Use appropriate load-balancing methods: Reduced latency and reduced network traffic are two of fog computing's main benefits, but neither can be achieved if the fog nodes themselves are not adequately monitored and load-balanced. It is important to avoid overloading or underloading the fog nodes. Load-balanced fog layers can improve a variety of Quality of Service (QoS) metrics, including resource utilization, throughput, performance, response time, cost, and energy consumption (see the sketch after this list).
- Select a storage option based on your needs: Different sensor levels call for different storage choices, depending on the kinds of sensors the organization supports. Rotating disks are suitable for large media libraries, whereas local flash chips perform well for security keys, log files, and tables. Anything that demands a lot of in-memory storage requires a data server, which should be avoided entirely in a fog architecture. The price of storage per GB must also be taken into account when selecting hardware.
- Think about energy efficiency: The growing amount of hardware can quickly result in a large, overlooked increase in energy usage. Appropriate measures must be put in place to preserve energy efficiency, including ambient cooling, low-power silicon, and selective power-down modes.
- Plan for continuous fog services: Fog nodes must function independently of the main computer system and of one another. To prevent the failure of a single node from taking down the entire service, the system must be built for high availability. Specific data backup plans must be put in place and exercised regularly, depending on the type and function of the fog node.
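As a small illustration of the load-balancing practice above, the sketch below assigns each incoming task to the least-loaded fog node. The node names and capacities are hypothetical, and a real scheduler would also weigh latency, energy use, and the other QoS metrics mentioned earlier.

```python
# Hypothetical least-loaded load-balancing sketch for a set of fog nodes.
# Node names and capacities are illustrative assumptions only.

class FogNode:
    def __init__(self, name: str, capacity: int):
        self.name = name
        self.capacity = capacity
        self.active_tasks = 0

    @property
    def utilization(self) -> float:
        """Fraction of this node's capacity currently in use."""
        return self.active_tasks / self.capacity

def assign_task(nodes: list) -> FogNode:
    """Send the next task to the node with the lowest current utilization."""
    target = min(nodes, key=lambda n: n.utilization)
    target.active_tasks += 1
    return target

if __name__ == "__main__":
    cluster = [FogNode("gateway-a", 10), FogNode("gateway-b", 5)]
    for _ in range(6):
        node = assign_task(cluster)
        print(f"task -> {node.name} (utilization {node.utilization:.0%})")
```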
What is the History of Fog Computing?
Ginny Nichols, a product line manager at Cisco, coined the phrase "fog computing" in 2014. In meteorology, fog refers to low-lying clouds, and this computing approach is called "fog" because it concentrates at the edge of the network. As fog computing grew in popularity, IBM coined the term edge computing for a related approach.
Cisco, Microsoft, Dell, Intel, Arm, and Princeton University collaborated to found the OpenFog Consortium. General Electric (GE), Foxconn Technology Group, and Hitachi also participated in the partnership. The consortium's main goals were to promote and standardize fog computing. In 2019, the OpenFog Consortium (OFC) merged with the Industrial Internet Consortium (IIC).