There’s a lot of buzz surrounding the Internet of Things and its impact on industry and governmental operations. More recently, we’ve seen an uptick in decentralized approaches to response and deployment in areas like public safety and transportation. This increase in activity has pushed innovation in how smart devices in decentralized networks transmit and process data to gain actionable intelligence in and out of a field of operation.
A continuing issue with time-sensitive applications of IoT technologies is the amount of time needed to send data between devices operating at the “edge” of a network and centralized processing clouds that take the data, analyze it, and provide intelligence on it. Even a few seconds can be a few seconds too many when talking about areas like public safety, first response, or traffic management.
To decrease latency and push data processing closer to the point of data collection, new models of edge computing have emerged. Here we’ll talk about edge computing as a concept and a closely related method of processing called “fog” computing. These two methods share similar approaches to solving computational problems in IoT infrastructures related to responsiveness but differ in a few key areas.
What Problems Are Fog and Edge Computing Solving?
Primarily, edge and fog computing attempt to minimize the gap between data collected, data processed, and actionable intelligence produced from the data itself.
Consider the push to use smart devices connected to the cloud to aid in proactive first response. The idea is that smartphones (or any devices) carried by first responders can be used to gather and report information to these professionals in times of emergency, providing real-time data that allows them to act with better intelligence.
However, in emergency situations, first-responders can’t wait for data from one end of a network to flow into a centralized cloud server for processing and analysis, only to flow back out to their location.
Therefore, the primary goals of edge computing are:
- Increasing flexibility nearer to the assets in question,
- Reducing the time it takes to process and interpret data before making decisions, and
- Reducing the amount of bandwidth consumed on the network.
Time is of the essence. So, when we talk about “edge” and “fog” computing, we are referencing two approaches to solving this latency problem, both of which empower decision-making for organizers and professionals in the field. In this specific case, both approaches “push” data processing out from the center of an IoT network, closer to the devices themselves.
What is Edge Computing?
To reduce the amount of data transmitted and lower the latency between data collection and processing, smart devices can be given the capacity to process their own data locally. This is called “edge” computing.
Think of the network of devices or tools connected to sensors that continuously take measurements and record data. The data could be about the location of the device, the condition of the user, or even the processes of the device itself. The device then sends this data “as-is” (minus any preprocessing or encryption) back to a centralized cloud network for analysis and storage. If the central processing cloud is “the center” (or “a” center) of the network, then the smart device recording data is literally on the edge of the network.
Edge computing, therefore, moves some or all of the processing operations typically done in a centralized cloud out to the devices themselves. The devices process some of the data for quick analysis and send some, or all, of it back to a centralized cloud server for complete processing.
So, instead of merely sending back data to a cloud processor for analysis, a device with a first responder can provide on-the-spot analytics based on conditions on the ground. The data will eventually make its way back to the cloud, but the responder won’t have to wait for analysis to make a decision.
Since processing happens so close to the point of collection, it reduces time and latency between collection and analysis. It also provides the devices, or the controllers of the devices, more readily-available data on which to act without waiting for cloud servers to compute the data and return it.
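The pattern described above can be sketched in a few lines. This is a minimal, illustrative Python example, not a real IoT SDK: the alert threshold, class name, and summary fields are all assumptions made for the sake of the sketch. The idea is simply that the device analyzes each reading on the spot and only ships a compact summary upstream.

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical threshold for an on-device alert (illustrative only).
ALERT_THRESHOLD = 100.0

@dataclass
class EdgeNode:
    """Sketch of an edge device that processes its own readings locally."""
    buffer: list = field(default_factory=list)

    def ingest(self, reading: float) -> dict:
        """Analyze a sensor reading on the device itself, with no cloud round trip."""
        self.buffer.append(reading)
        return {"alert": reading > ALERT_THRESHOLD, "value": reading}

    def summarize_for_cloud(self) -> dict:
        """Forward a compact summary upstream instead of every raw reading."""
        summary = {
            "count": len(self.buffer),
            "mean": mean(self.buffer) if self.buffer else None,
            "max": max(self.buffer) if self.buffer else None,
        }
        self.buffer.clear()  # raw data no longer needs to travel upstream
        return summary

node = EdgeNode()
alerts = [node.ingest(r) for r in (42.0, 87.5, 120.3)]
summary = node.summarize_for_cloud()
```

Here the third reading triggers an immediate local alert, while the central cloud later receives only the three-reading summary rather than the raw stream.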
What is Fog Computing?
Fog computing extends the concept of edge computing by placing more robust, cloud-like data processing and intelligence gathering technologies closer to the edge of the network. With a fog computing model, several edge nodes in a network collect data and send it to an intermediary “fog” for processing.
Instead of having a centralized cloud for processing, you might instead have a smaller “fog” (or local cloud) situated on a local area or mobile network near the devices on the edge of the response network.
Now, instead of relying on the edge devices to do all the processing, data can move through a local processing cloud that provides analysis and intelligence quickly over a larger space, providing flexible response times over a larger area (and a larger group of devices).
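Continuing the sketch above, a fog tier can be modeled as an aggregation point between several edge nodes and the central cloud. Again, this is an illustrative assumption, not a standard API: the node IDs and payload fields are invented for the example. The fog node answers local-area queries quickly and forwards only a regional aggregate to the cloud.

```python
from collections import defaultdict
from statistics import mean

class FogNode:
    """Sketch of an intermediary 'fog' tier aggregating several edge nodes."""

    def __init__(self):
        self.readings = defaultdict(list)

    def receive(self, edge_id: str, value: float) -> None:
        # Edge nodes push readings to the nearby fog node
        # rather than to a distant central cloud.
        self.readings[edge_id].append(value)

    def regional_view(self) -> dict:
        # Fast, local-area analysis across every connected edge node.
        return {eid: mean(vals) for eid, vals in self.readings.items()}

    def forward_to_cloud(self) -> dict:
        # Only a compact regional aggregate travels to the central cloud,
        # saving bandwidth and round-trip time.
        payload = {
            "nodes": len(self.readings),
            "regional_mean": mean(
                v for vals in self.readings.values() for v in vals
            ),
        }
        self.readings.clear()
        return payload

fog = FogNode()
for eid, v in [("cam-1", 10.0), ("cam-1", 14.0), ("cam-2", 20.0)]:
    fog.receive(eid, v)
view = fog.regional_view()
payload = fog.forward_to_cloud()
```

The design choice mirrors the article's point: the fog node serves a larger area (and more devices) than any single edge node, while still keeping heavy traffic off the path to the central cloud.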
In the Fog and on the Edge: Computing in the IoT Market
We can see then that these terms aren’t necessarily opposed to one another, even though they are different.
Edge computing happens at the edge of a network, on the devices themselves. Data is immediately accessible, processing is quick, and response times become as fast as the devices can work with the information given. Edge computing also has a high level of applicability across several industries. Alongside the first-responder example used here, edge computing has been utilized in projects like Amsterdam’s Smart Traffic Management project, which seeks to use smart edge computing to help optimize traffic flow in real time. Future developments in this project also include the ability for enabled automobiles to connect to the traffic management network, pushing the processing and analysis of data further out to the edge, and into the cars and trucks on the road.
A fog cloud connected to these edge nodes, however, provides more robust controls and processing for the edge nodes while still limiting the amount of bandwidth and latency on a smart network. Edge nodes aren’t going to carry the heavy load of processing larger amounts of data, so fog clouds can pick up this slack and filter data to and from the edge nodes, and to and from a central cloud.
But more importantly, fog clouds can serve as nodes for a more extensive network that spans multiple verticals of operation that include transportation, public safety, sanitation, and energy distribution. A network of smart devices can also provide a backbone for crime prevention and response over wider areas of coverage.
As IoT methods push computing closer and closer to the edge of smart networks, concepts like edge and fog computing are going to become more mainstream terms. Smart cities like Amsterdam are already mobilizing these technologies to significant effect. Businesses that rely on extensive digital communications, transportation and fleet management, or industrial process management will also want to better understand how these approaches to data processing can change how business is done.