It’s an extraordinary thought that when you make a request over the Internet, it quite likely travels halfway around the world and back within the blink of an eye. Try sending yourself an email: it will appear in your inbox almost the instant you click “send”, despite the fact that it may well have travelled to a server in Arizona and back again.
Such high-speed operation has been the foundation of the Internet, built around massive servers gathered in a handful of often geographically remote locations. However, as applications, and our demand for them, have become increasingly data- and processing-hungry, providers are looking for ways to cut the time and expense of shipping data to and from end users.
Edge computing is already becoming a mainstay of such efforts. Somewhat paradoxically, services on the “edge” are actually those closest to you, the user. Whereas previously all processing of your requests would have taken place on a central server, or more recently in the cloud, applications are increasingly designed to bring that processing onto, or close to, your device, be that the desktop computer in your office or your child’s gaming tablet.
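To make the contrast concrete, here is a minimal sketch of the two models, using a hypothetical photo-labelling app; the endpoint URL and the on-device “model” are illustrative placeholders, not any real service:

```python
# A minimal sketch of the shift from central to edge processing.
# The URL is a placeholder, and the "model" is a stand-in for whatever
# on-device processing a real app would bundle.

import json
import urllib.request

def label_photo_centrally(photo: bytes) -> str:
    """Old model: upload the photo and wait for a distant server's answer."""
    req = urllib.request.Request(
        "https://photos.example.com/label",  # hypothetical central endpoint
        data=photo,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req) as resp:  # full network round trip
        return json.load(resp)["label"]

def label_photo_on_device(photo: bytes) -> str:
    """Edge model: the same work runs locally; no data leaves the device."""
    # Placeholder for a bundled on-device model; here, a trivial heuristic.
    return "large photo" if len(photo) > 1_000_000 else "small photo"
```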
This has multiple advantages, for users and service providers alike. First, from a user perspective, it speeds up your applications: your requests are no longer travelling halfway around the world, with all the lag that entails. With an email, as in the example above, you barely notice that lag; with a complex multiplayer game, it can be a significant problem, and one that is dramatically reduced by bringing processing closer to the player.
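The physics puts a floor under that lag. As a rough illustration, assuming signals travel through optical fibre at around 200,000 km per second (about two-thirds of the speed of light in a vacuum), and using illustrative distances:

```python
# Back-of-envelope propagation delay: a minimal sketch. The fibre speed
# and the distances are illustrative assumptions, and real round trips
# add routing, queuing and server processing time on top.

FIBRE_SPEED_KM_PER_S = 200_000  # roughly 2/3 the speed of light in a vacuum

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time over fibre, in milliseconds."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_S * 1_000

print(f"Server 8,000 km away: {round_trip_ms(8_000):.0f} ms")  # ~80 ms
print(f"Edge node 50 km away: {round_trip_ms(50):.2f} ms")     # ~0.5 ms
```

At 60 frames per second a game has roughly 16 ms per frame, so an 80 ms round trip costs several frames of lag, while a nearby edge node all but eliminates it.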
Edge computing also has significant security advantages: requests and data travel through a much smaller loop than with traditional processing, and the fewer nodes and distributors involved, the less opportunity there is for malicious actors to gain access. For the same reason, edge computing offers significantly greater reliability: simplistically, if every node has the same small, independent chance of failing, a chain of three nodes is roughly 75% less likely to suffer a failure than a chain of twelve.
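The arithmetic behind that figure is straightforward to sketch. Assuming, simplistically, that each node fails independently with the same small probability (the 1% rate below is purely illustrative), risk grows almost linearly with chain length, so three nodes carry roughly a quarter of the risk of twelve:

```python
# A minimal sketch of the reliability arithmetic above. It assumes each
# node in the chain fails independently with the same small probability;
# the node counts and failure rate are illustrative, not measured.

def chain_failure_probability(nodes: int, p_node_failure: float) -> float:
    """Probability that at least one node in a chain of `nodes` fails."""
    return 1 - (1 - p_node_failure) ** nodes

p = 0.01  # assumed 1% chance that any single node fails
short_chain = chain_failure_probability(3, p)
long_chain = chain_failure_probability(12, p)

print(f"3-node chain failure risk:  {short_chain:.4f}")  # ~0.0297
print(f"12-node chain failure risk: {long_chain:.4f}")   # ~0.1136
print(f"Risk reduction: {1 - short_chain / long_chain:.0%}")  # ~74%
```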
The advantages for service providers are obvious: the more processing they can shift out to the edge, i.e. to user devices or servers close to them, the less energy and processing power, and the fewer security systems and physical servers, they need to supply their services, and so the lower their overheads.
With the seemingly unstoppable growth of the Internet of Things (there are already more Internet-connected devices in the world than there are human beings), edge computing is fast becoming the norm for new applications. In some cases this is because it provides a speed of processing that central servers simply cannot match: driverless cars, for example, must process so much information every second that it would be impossible for them to send it all to a server and receive a response swiftly enough to make timely decisions. And as more and more homes, transport systems and pieces of municipal infrastructure become Internet-linked and Internet-controlled, provision will simply be swamped unless more processing moves on board devices and away from central servers. There is no doubt that, for the future of computing, the edge is where it’s at.
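As a rough sketch of what “moving processing on board” means for such devices, consider a hypothetical temperature sensor that samples continuously but sends only a small summary, or an urgent alert, upstream; all names and numbers here are illustrative:

```python
# A minimal sketch of on-board processing for an IoT sensor, with made-up
# numbers: the device samples many times a second but ships only a small
# summary (or an urgent alert) upstream, instead of the raw stream.

from statistics import mean

SAMPLES_PER_SECOND = 100
ALERT_THRESHOLD_C = 60.0  # illustrative local alert threshold

def summarise_on_device(readings_c: list[float]) -> dict:
    """Reduce a second's worth of raw readings to one small message."""
    return {
        "mean_c": round(mean(readings_c), 2),
        "max_c": max(readings_c),
        "alert": max(readings_c) > ALERT_THRESHOLD_C,  # decided locally
    }

raw = [20.0 + i * 0.01 for i in range(SAMPLES_PER_SECOND)]  # fake sensor data
print(summarise_on_device(raw))
# 100 readings in, one three-field message out: the central server sees a
# fraction of the traffic it would receive if every reading were uploaded.
```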