TECHnalysis Research Blog

February 24, 2022
5G Edge Computing Challenges Remain

By Bob O'Donnell

With less than a week to go until the Mobile World Congress (MWC) trade show kicks off in Barcelona, a great deal of attention is being focused on the latest developments in 5G and related technologies. Now that 5G networks are becoming increasingly common around the world, the emphasis is shifting away from simply faster download speeds and toward the new capabilities that 5G networks were supposed to enable.

Chief among these are edge computing applications. In theory, 5G-based edge computing was supposed to deliver all kinds of futuristic applications. It was the promise of these new applications that arguably triggered some of the initial excitement and hype around the latest generation mobile network standard.

But now that the time has come to turn these concepts into reality, there's a great deal of uncertainty as to how that's going to be achieved. Don't get me wrong: I expect to see a huge amount of discussion and even product announcements around 5G edge computing coming out of MWC. When you start to scratch below the surface of this news, however, the picture isn't so clear.

A big part of the problem boils down to definitions—or rather, the lack of them. The "edge" has always been a somewhat fuzzy topic, and now we've seen several variations on it—like "near edge" and "far edge"—that, while certainly well intentioned, only serve to confuse matters further. It's hard to have a clear conversation about a concrete topic when the subject matter at the heart of the discussion is so amorphous.

Another issue that's becoming more apparent is the uncertainty around where these 5G edge applications, along with the necessary computing resources, will live, how they will be deployed, and who will offer them. Some of this stems from the lack of clarity around the business models of the various players—carriers, cloud computing providers, network infrastructure vendors, application providers, businesses, etc.—and how they will need to interact.

To put things more bluntly, early conversations around 5G edge often seemed to suggest we were headed towards a day when computing resources (i.e., small, power-friendly servers) would be housed in nearly every cell tower and those servers would run specialized “edge” applications that could benefit from their proximity (e.g., low latency, high speed) to other devices. However, the reality of today’s Mobile Edge Computing (MEC) offerings—the phrase that many telcos and other tech players are using to describe 5G edge computing—looks to be significantly different.

For one, it's clear now that the cost, complexity, and logistical hassle of adding computing resources to every cell tower, or even a modest percentage of them, will keep that from happening for a very long time. In fact, in my conversations with major US carriers, it sounds like we're several stages away from anything like that. Instead, standard practice seems to be based around using a few geographically dispersed data centers to run edge-based applications. Interestingly, most of them aren't housed in carrier facilities. Instead, carriers are essentially leasing space from the major cloud computing providers. In other words, much of the "edge computing" is really just public cloud computing that happens to be connected via cellular networks. There's certainly nothing wrong with this approach; in fact, it makes logical sense. Nevertheless, it's much different from what many people perceive to be "the edge."

Another important clarification is that the carriers certainly do have large data centers of their own, and many of them play critical roles in 5G networks. However, much of that work is actually for running the network itself, using software-based technology to drive virtual RAN (radio access network) deployments and some early experiments in Open RAN. Part of the confusion around this topic is that many of the newest 5G compute-based offerings are really tied to this shift from a traditional network-infrastructure hardware model to a software-based, virtualized one. That's clearly a very important step forward for telcos in general and a great opportunity for tech vendors to get their products and technologies embedded into 5G networks. Architecturally, however, it's still one step below actually running edge-based applications on top of this software-defined network.

Seemingly, the next logical step in making edge computing more real for carrier networks would be to extend to a wider range of cloud computing sites, to expand into their own regional data centers, or, more likely, some combination thereof. After that, we could see telcos extending their computing capabilities to the traditional COs (central offices) that US carriers have dispersed around the country, or to co-location sites such as those offered by companies like Equinix. There are multiple challenges to each of these scenarios, however, not the least of which is the time, money, and effort it would take to drive these expanded computing efforts. An even more fundamental problem is determining who actually offers the value when resources are shared this way. In other words, does an enterprise or consumer-focused business go to a carrier or to a cloud computing provider to get an edge computing project done?

Yet another challenge in understanding the real status of 5G edge computing arises when you compare public 5G networks with the hot new world of private 5G networks. Running edge applications on public networks is significantly more challenging on architectural, logistical, and practical fronts, because, right now, it's extremely difficult to know which applications would need to run on which portions of the public network. Ideally, of course, combining a vast amount of computing resources with the flexibility and "mobility" of containerized, cloud-native applications would let essentially any edge computing app run anywhere, but practically speaking, that's just not possible now (nor will it likely be for some time).

On top of that, while there have been a few interesting applications touted for public 5G edge computing, the truth is that the delay for many applications is due in part to the fact that companies are still trying to figure out what exactly they want to do. As mentioned earlier, given that the computing resources for these public 5G edge applications aren't really that close to the edge, it raises the question of how or why this is different from established cloud computing practices. (And let's be honest, how many public network-driven applications will ever really need single-millisecond latency?)

Long-promised network slicing technology could offer a dedicated path for 5G-based applications here. However, given that network slicing requires 5G standalone (SA) and that we're still years away from SA being the standard in the US, network slicing won't be coming anytime soon either. (In the US, only T-Mobile offers a 5G SA network, and it doesn't currently offer network slicing.)

With private 5G networks, on the other hand, the benefits of edge computing become much more obvious, and it's here where I think the real opportunity lies—at least for the next several years. In a private network, geographical proximity to computing resources is a non-issue, access to the applications that need to run is inherently limited to employees of that organization, and the arguments about the theoretical benefits that we've long been promised for 5G edge computing all start to make sense. That's why announcements like AT&T's new Private 5G Edge offering—done in conjunction with and leveraging Microsoft's Azure cloud computing resources—are the kind of news I expect to see more of over the coming year. Even here, though, the company acknowledges that it's early days for 5G edge computing on private networks, because organizations are still trying to figure out how best to take advantage of it.

Despite all the concerns I’ve raised, I do believe embedding computing capabilities into the network and enabling 5G edge computing is a critical differentiator for the latest generation mobile networks. Exactly when those capabilities are going to be widely available, who’s going to offer them, who’s going to make money from them, and what they’re going to do, however, remain largely unanswered questions. Stronger definitions of what the various flavors of edge computing actually are would certainly help, but it’s also going to take a lot of technical experimentation, business model adjustments, and creative thinking to help fulfill the vision of what 5G edge computing can really be.

Bob O'Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.