

Eduardo Aguilar Pelaez
on 2 April 2020

Edge AI in a 5G world – part 2: Why make cell towers smart?


This is part of a blog series on the impact that 5G and GPUs at the edge will have on the rollout of new AI solutions. You can read the other posts here.

Recap

In part 1 we talked about the industrial applications and benefits that 5G and fast compute at the edge will bring to AI products. In this part we will go deeper into how you can benefit from this new opportunity.

Photo by NASA

Embedded compute vs Cost

Decades of Moore’s Law have given us smartphones at a price we’re willing to pay, but IoT devices need to be much cheaper than that. Adding today’s fastest CPUs or GPUs to IoT devices costs a significant amount, which puts a hard limit on what the market is currently willing to buy at scale.

The IoT devices currently on the market are usually underpowered and have limited connectivity. With 5G connectivity and shared compute resources at the edge, these constrained devices will soon be able to do much more.

For instance, adding a GPU to each IoT device for AI model inference would mean a significant increase in the hardware bill of materials. This cost would be passed on to the consumer, and the higher price would drastically reduce the target audience. Instead, 5G allows heavy computation to be offloaded to nearby shared GPUs and a response to be returned with minimal latency, as sketched below.
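
As a rough illustration of this offload pattern, the sketch below shows a constrained device posting a camera frame to a hypothetical inference service running on a shared GPU at a nearby cell site. The endpoint URL, payload format and response schema are assumptions for illustration only, not a specific Canonical or 5G API.

```python
# Minimal sketch: an IoT device offloading inference to a shared edge GPU.
# The endpoint and payload format are hypothetical; any HTTP inference
# server reachable over the 5G link could play this role.
import time
import requests

EDGE_INFERENCE_URL = "http://edge-gpu.local:8080/v1/infer"  # assumed edge endpoint

def classify_frame(jpeg_bytes: bytes) -> dict:
    """Send one camera frame to the nearby edge GPU and return its prediction."""
    start = time.monotonic()
    response = requests.post(
        EDGE_INFERENCE_URL,
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
        timeout=0.2,  # a tight budget only viable with low-latency 5G and nearby compute
    )
    response.raise_for_status()
    latency_ms = (time.monotonic() - start) * 1000
    return {"prediction": response.json(), "latency_ms": latency_ms}

if __name__ == "__main__":
    with open("frame.jpg", "rb") as f:
        print(classify_frame(f.read()))
```

The device itself only needs enough compute to capture data and make a network call; the expensive GPU is shared across many such devices at the cell site.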

We will dive into this approach in the next section.

AI training & ML operations

Creating a new AI product has two engineering aspects, namely:

  1. Model training and
  2. Inference

Model training refers to the machine learning that is usually done with ‘labelled data’ or simulations. This has large data and compute requirements.

Once the model has been trained, implementing and operating the inference is where much of the complexity appears. This is what we will focus on for most of this post, and in particular real-time AI solutions.

Throughout this blog series we will keep both of these in mind, given that today’s input data needs to be retained so it can become tomorrow’s training data.
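
As a sketch of what retaining today’s input data can look like in practice, the snippet below wraps a hypothetical model’s predict function so that every inference request is also written, with a timestamp, to a local store that can later be labelled and fed back into training. The `model` object and the storage layout are placeholders, not part of any specific stack.

```python
# Minimal sketch: retain every inference input so it can become future training data.
# `model.predict` and the local storage path are placeholders for whatever stack is in use.
import json
import time
import uuid
from pathlib import Path

# Assumed local store; in production this would typically be object storage at the edge.
DATA_DIR = Path("retained_inputs")
DATA_DIR.mkdir(exist_ok=True)

def predict_and_retain(model, features: dict) -> dict:
    """Run inference and persist the raw input alongside the model's output."""
    prediction = model.predict(features)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "features": features,
        "prediction": prediction,  # stored as a weak label that can be corrected later
    }
    (DATA_DIR / f"{record['id']}.json").write_text(json.dumps(record))
    return prediction
```

Keeping the raw inputs alongside the model’s outputs is what later allows the model to be retrained or audited against real-world data.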

To illustrate this further, in the next blog we will do a gap analysis of the technical requirements for model training and AI operations, as well as the new techniques available to close those gaps.

