What is edge computing?

The future of software will be managed

Companies like Amazon, Microsoft, and Google have proven to us that we can trust them with our personal data. Now it’s time to reward that trust by giving them complete control over our computers, toasters, and cars.

Allow me to introduce you to “edge” computing.

Edge is a buzzword. Like “IoT” and “cloud” before it, edge means everything and nothing. But I’ve been watching some industry experts on YouTube, listening to some podcasts, and even, on occasion, reading articles on the topic. And I think I’ve come up with a useful definition and some possible applications for this buzzword technology.

What is edge computing?

In the beginning, there was One Big Computer. Then, in the Unix era, we learned how to connect to that computer using dumb (not a pejorative) terminals. Next came personal computers, the first time regular people really owned the hardware that did the work.

Right now, in 2018, we’re firmly in the cloud computing era. Many of us still own personal computers, but we mostly use them to access centralized services like Dropbox, Gmail, Office 365, and Slack. Additionally, devices like Amazon Echo, Google Chromecast, and the Apple TV are powered by content and intelligence that’s in the cloud — as opposed to the DVD box set of Little House on the Prairie or CD-ROM copy of Encarta you might’ve enjoyed in the personal computing era.

As centralized as this all sounds, the truly amazing thing about cloud computing is that a seriously large percentage of all companies in the world now rely on the infrastructure, hosting, machine learning, and compute power of a very select few cloud providers: Amazon, Microsoft, Google, and IBM.

Amazon, the largest by far of these “public cloud” providers (as opposed to the “private clouds” that companies like Apple, Facebook, and Dropbox host themselves), had 47 percent of the market in 2017.

The advent of edge computing as a buzzword you should perhaps pay attention to is the realization by these companies that there isn’t much growth left in the cloud space. Almost everything that can be centralized has been centralized. Most of the new opportunities for the “cloud” lie at the “edge.”

So, what is edge?

The word edge in this context means literal geographic distribution. Edge computing is computing that’s done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work. It doesn’t mean the cloud will disappear. It means the cloud is coming to you.

That said, let’s get out of the word definition game and try to examine what people mean practically when they extoll edge computing.

Latency

One great driver for edge computing is the speed of light. If Computer A needs to ask Computer B, half a globe away, before it can do anything, the user of Computer A perceives this delay as latency. The brief moments after you click a link, before your web browser starts to actually show anything, are in large part due to the speed of light. Multiplayer video games implement numerous elaborate techniques to mitigate true and perceived delay between you shooting at someone and you knowing, for certain, that you missed.
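
To put rough numbers on that, here’s a back-of-the-envelope TypeScript sketch of the floor the speed of light puts on round-trip time, assuming signals travel through fiber at roughly two-thirds of c. Real-world latency only goes up from there, once routing, congestion, and server processing are added.

```typescript
// Back-of-the-envelope: the round-trip time imposed by physics alone,
// before any network or server overhead.
const SPEED_OF_LIGHT_KM_S = 299_792; // in a vacuum
const FIBER_FACTOR = 0.67;           // light in glass travels at roughly 2/3 c

function roundTripMs(distanceKm: number): number {
  const oneWaySeconds = distanceKm / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR);
  return 2 * oneWaySeconds * 1000;
}

console.log(roundTripMs(20_000).toFixed(0)); // half the globe: ~199 ms
console.log(roundTripMs(100).toFixed(1));    // a nearby edge node: ~1.0 ms
```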

Voice assistants typically need to resolve your requests in the cloud, and the round-trip time can be very noticeable. Your Echo has to process your speech and send a compressed representation of it to the cloud; the cloud has to uncompress that representation and process it, which might involve pinging another API somewhere (maybe to figure out the weather, adding more speed-of-light delay); then the cloud sends your Echo the answer, and finally you can learn that today you should expect a high of 85 and a low of 42, so definitely give up on dressing appropriately for the weather.
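
To make that round trip concrete, here’s a toy latency budget. Every stage and every number below is an illustrative assumption, not a measurement of how Alexa actually works; the point is only that moving work on-device deletes whole stages from the total.

```typescript
// Hypothetical latency budget for one voice request.
// All figures are illustrative assumptions, not measurements.
type Stage = { name: string; ms: number };

const cloudPath: Stage[] = [
  { name: "local speech capture + compression", ms: 50 },
  { name: "round trip to the cloud",            ms: 200 },
  { name: "decompress + speech recognition",    ms: 150 },
  { name: "call a weather API",                 ms: 100 },
  { name: "synthesize + stream the answer",     ms: 100 },
];

const edgePath: Stage[] = [
  { name: "on-device speech recognition", ms: 80 },
  { name: "call a weather API",           ms: 100 },
  { name: "on-device speech synthesis",   ms: 40 },
];

const total = (stages: Stage[]) => stages.reduce((sum, s) => sum + s.ms, 0);
console.log(`cloud: ${total(cloudPath)} ms, edge: ${total(edgePath)} ms`);
```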

So, a recent rumor that Amazon is working on its own AI chips for Alexa should come as no surprise. The more processing Amazon can do on your local Echo device, the less your Echo has to rely on the cloud. It means you get quicker replies, Amazon’s server bills shrink, and conceivably, if enough of the work is done locally, you could end up with more privacy (if Amazon is feeling magnanimous).

Privacy and security

It might be weird to think of it this way, but the security and privacy features of an iPhone are well accepted as an example of edge computing. Simply by doing encryption and storing biometric information on the device, Apple offloads a ton of security concerns from the centralized cloud to its diasporic users’ devices.

But the other reason this feels like edge computing to me, not personal computing, is because while the compute work is distributed, the definition of the compute work is managed centrally. You didn’t have to cobble together the hardware, software, and security best practices to keep your iPhone secure. You just paid $999 at the cellphone store and trained it to recognize your face.

The management aspect of edge computing is hugely important for security. Think of how much pain and suffering consumers have experienced with poorly managed Internet of Things devices.

That’s why Microsoft is working on Azure Sphere, which is a managed Linux OS, a certified microcontroller, and a cloud service. The idea is that your toaster should be as difficult to hack, and as centrally updated and managed, as your Xbox.

I have no idea if the industry will embrace Microsoft’s specific solution to the IoT security problem, but it seems an easy guess that most of the hardware you buy a few years from now will have its software updated automatically and security managed centrally. Because otherwise your toaster and dishwasher will join a botnet and ruin your life.

If you doubt me, just look at the success Google, Microsoft, and Mozilla have had in moving browsers to an “evergreen” model.

Think about it: you could probably tell me which version of Windows you’re running. But do you know which version of Chrome you have? Edge computing will be more like Chrome, less like Windows.

Bandwidth

Security isn’t the only way that edge computing will help solve the problems IoT introduced. The other hot example edge proponents mention a lot is the bandwidth savings it enables.

For instance, if you buy one security camera, you can probably stream all of its footage to the cloud. If you buy a dozen security cameras, you have a bandwidth problem. But if the cameras are smart enough to only save the “important” footage and discard the rest, your internet pipes are saved.
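
The arithmetic is simple. Here’s a sketch, with an assumed per-camera bitrate and an assumed guess at how much footage is actually “important”:

```typescript
// Uplink bandwidth: stream everything vs. filter at the edge.
// The bitrate and the "important" fraction are illustrative assumptions.
const CAMERAS = 12;
const STREAM_MBPS = 4;           // assumed per-camera bitrate
const IMPORTANT_FRACTION = 0.02; // assume ~2% of footage is worth keeping

const naiveUplinkMbps = CAMERAS * STREAM_MBPS;               // 48 Mbps, sustained
const edgeUplinkMbps = naiveUplinkMbps * IMPORTANT_FRACTION; // ~1 Mbps on average

console.log(`stream everything: ${naiveUplinkMbps} Mbps sustained`);
console.log(`edge filtering:    ~${edgeUplinkMbps.toFixed(1)} Mbps average`);
```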

Almost any technology that’s applicable to the latency problem is applicable to the bandwidth problem. Running AI on a user’s device instead of entirely in the cloud seems to be a huge focus for Apple and Google right now.

But Google is also working hard at making even websites more edge-y. Progressive Web Apps typically have offline-first functionality. That means you can open a “website” on your phone without an internet connection, do some work, save your changes locally, and only sync up with the cloud when it’s convenient.
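
Under the hood, that offline-first behavior usually comes from a service worker. Here’s a minimal sketch of the cache-first pattern (the cached file paths are placeholders, not any particular app’s assets):

```typescript
// sw.ts: a minimal offline-first service worker.
// Cache-first: answer from the local cache when possible, hit the network
// when necessary, and save what we fetch for the next offline visit.
const CACHE = "app-shell-v1";
const sw = self as unknown as ServiceWorkerGlobalScope;

sw.addEventListener("install", (event) => {
  // Pre-cache the app shell so the "website" opens with no connection.
  // These paths are placeholders for whatever your app actually ships.
  event.waitUntil(
    caches.open(CACHE).then((cache) =>
      cache.addAll(["/", "/app.js", "/style.css"])
    )
  );
});

sw.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then(
      (cached) =>
        cached ??
        fetch(event.request).then((response) => {
          // Stash a copy so this resource works offline next time.
          const copy = response.clone();
          caches.open(CACHE).then((cache) => cache.put(event.request, copy));
          return response;
        })
    )
  );
});
```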

Google is also getting smarter about combining local AI features for the purposes of privacy and bandwidth savings. For instance, Google Clips keeps all your data local by default and does its magical AI inference locally. It doesn’t work very well at its stated purpose of capturing cool moments from your life. But, conceptually, it’s quintessential edge computing.

All of the above

Self-driving cars are, as far as I’m aware, the ultimate example of edge computing. Due to latency, privacy, and bandwidth concerns, you can’t feed the data from a self-driving car’s numerous sensors up to the cloud and wait for a response. Your trip can’t survive that kind of latency, and even if it could, the cellular network is too inconsistent to rely on for this kind of work.

But cars also represent a full shift away from user responsibility for the software they run on their devices. A self-driving car almost has to be managed centrally. It needs to get updates from the manufacturer automatically, it needs to send processed data back to the cloud to improve the algorithm, and the nightmare scenario of a self-driving car botnet makes the toaster and dishwasher botnet we’ve been worried about look like a Disney movie.

What are we giving up?

I have some fears about edge computing that are hard to articulate, and possibly unfounded, so I won’t dive into them completely.

But the big picture is that the companies who do it the best will control even more of your life experiences than they do right now.

When the devices in your home and garage are managed by Google Amazon Microsoft Apple, you don’t have to worry about security. You don’t have to worry about updates. You don’t have to worry about functionality. You don’t have to worry about capabilities. You’ll just take what you’re given and use it the best you can.

In this worst-case world, you wake up in the morning and ask Alexa Siri Cortana Assistant what features your corporate overlords have pushed to your toaster, dishwasher, car, and phone overnight. In the personal computer era you would “install” software. In the edge computing era, you’ll only use it.

It’s up to the big companies to decide how much control they want to gain over their users’ lives. But it might also be up to us users to decide if there’s another way to build the future. Yes, it’s kind of a relief to take your hands off the steering wheel and let Larry Page guide you. But what if you don’t like where he’s going?