July 2021

the cloud and device while maintaining consistent performance.

With regard to infrastructure, edge computing is a network of local micro data centers for storage and processing, while the central data center oversees operations and draws valuable insights from the local data processing. However, we need to be mindful that edge computing is an extension of cloud computing architecture - an optimized solution for decentralized infrastructure.

The main difference between edge and cloud computing is where the data is processed. In the cloud, data is processed away from its source, so the transmission path can hit bottlenecks, which in turn leads to latency. In edge computing, the processing occurs closer to the data source. This reduces latency in data transmission and computation, thereby enhancing agility.

While conversations about the advantages of edge computing are exciting, to drive real value from it an organization needs to begin by identifying the pain points it addresses. The ultimate purpose of edge computing is to bring compute, storage, and network services closer to endpoints and end users to improve overall application performance. Based on this knowledge, IT architects must identify and document instances where edge computing can address existing network performance problems.

How does edge computing work?

In traditional enterprise computing, data is produced at a user's computer. That data moves across a WAN such as the internet and through the corporate LAN, where it is stored and worked on by an enterprise application. The results of that work are then conveyed back to the end user. However, if we consider the number of devices connected to a company's servers, and the volume of data they generate, it is far too much for a traditional IT infrastructure to accommodate.
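The round trip described above can be sketched with a toy latency model. This is purely illustrative and not from the article: the function names and all millisecond figures are assumed values chosen only to show why paying the WAN hop twice dominates the cloud path.

```python
# Toy model (assumed numbers): compare the round trip for cloud-only
# processing with processing at a nearby edge node.

def cloud_round_trip_ms(wan_ms=80, lan_ms=5, compute_ms=10):
    """Data crosses the WAN and the corporate LAN to the central
    data center, is processed, and the result travels back."""
    return 2 * (wan_ms + lan_ms) + compute_ms

def edge_round_trip_ms(edge_link_ms=5, compute_ms=12):
    """Data is processed at a local edge node; only the short
    local hop is paid in each direction."""
    return 2 * edge_link_ms + compute_ms

if __name__ == "__main__":
    print(f"cloud: {cloud_round_trip_ms()} ms")  # 180 ms
    print(f"edge:  {edge_round_trip_ms()} ms")   # 22 ms
```

Even with a slightly slower local compute step, the edge path wins in this sketch because the dominant cost, the WAN traversal, is removed entirely.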
So, IT architects have shifted focus from the central data center to the logical edge of the infrastructure -- taking storage and computing resources out of the data center and moving them to the point where the data is generated. The idea is simple: if we can't get the data closer to the data center, get the data center closer to the data.

Why is edge computing gaining popularity?

There are several reasons for the growing adoption of edge computing:

· With emerging technologies such as IoT and IoB, data is generated in real time. Devices enabled by these technologies require fast response times and considerable bandwidth to operate properly.
· Cloud computing is centralized. Transmitting and processing massive quantities of raw data puts a significant load on the network's bandwidth.
· The incessant movement of large quantities of data back and forth is beyond reasonable cost-effectiveness and leads to latency.
· Processing data at the source and then sending only valuable data to the center is a more efficient solution.

As organisations increasingly move to remote working models, we will witness wide adoption of edge computing, as it empowers remote work infrastructure with greater computation and storage capabilities. When millions of end-user devices operating across geographic locations are connected to a central data center, the IT infrastructure comes under tremendous strain. In such a scenario, edge computing has emerged as a viable architecture because it supports distributed computing, deploying compute and storage resources closer to the data source. Through this, it not only enables seamless decentralization of IT but also eliminates data congestion and latency issues. It allows enterprises to deploy local storage to collect and protect raw data, while local servers perform the essential analytics for faster decision-making before sending the results to the central data center.
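The pattern in the last sentence, local analytics first and only the result forwarded to the central data center, can be sketched as below. This is a minimal illustration, not a real system: the sensor readings and the `summarize_at_edge` helper are assumed for demonstration.

```python
# Minimal sketch: an edge node collects raw sensor readings, performs the
# essential analytics locally, and forwards only a compact summary upstream.

def summarize_at_edge(readings):
    """Aggregate a batch of raw readings into the small result
    the central data center actually needs."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

# One batch of raw data (assumed values, e.g. temperature readings).
raw = [21.3, 21.5, 22.1, 21.9, 35.0, 21.7]

summary = summarize_at_edge(raw)
# The center receives four numbers instead of the whole batch, cutting
# transmission volume while preserving the insight (the 35.0 anomaly
# still shows up in "max").
```

The design choice is the one the article describes: bandwidth is spent on valuable, already-analysed data rather than on shipping every raw reading across the WAN.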