“En-caching” the RAN – the AI way

RAN caching is an intuitive use-case for AI. Our report “AI and RAN – How fast will they run?” places caching third in its list of top AI applications in the RAN.

There is, frankly, nothing new about caching – it is as old as computing itself. The reason caching and RAN are being uttered in the same breath is primarily multi-access edge computing (MEC). MEC is a pragmatic concept: it leverages the distributed nature of RAN infrastructure in response to the explosion in mobile data generation and consumption.

Practically every point in the RAN is a possible caching destination – base stations, remote radio heads (RRHs), baseband units (BBUs), femtocells, macrocells and even user equipment.

The caching dilemma is multipronged – what to cache, where to cache and how much to cache.

In an ideal world, one would have infinite storage and processing capacity, interconnected with infinite throughput at zero latency. In the real world, however, each of these aspects – storage, processing power, throughput and latency – is finite.

Move content closer to the edge and latency drops, but the pressure on the backhaul for replications and updates climbs.

Conversely, centralize the content and the longer access paths add to the latency.

Conventional solutions use a magical keyword – optimize.

Optimization has its comfort zones – traffic patterns are predictable, spatial diversity is static, and the number of parameters to be considered is finite. None of this holds in present-day networks.

One has to accept that optimization is a loaded and flexible word. It has rather glibly placed itself in the pantheon of ‘AI-accepted’ epithets.

‘Real’ 5G expects its RAN to be a dynamic beast, continually morphing in response to user behavior, device type, and network conditions. Add content’s temporal and social features, like views and likes, to that mix. 5G RAN caching needs to be commensurately supple.

Let us sample a few of the very specific suppleness demands on caching:

  • Cached content can exist in multiple locations, and most data these days is mutable. Ensuring that all caches hold consistent, up-to-date data is crucial, so cache invalidation strategies are required to maintain data integrity (see the sketch after this list).
  • Network slicing poses its own challenges – optimal caching strategies are needed for each slice, and resources should not be wasted on redundant caches.
  • Cached data, being closer to the user and outside the traditionally more secure core network, can be more vulnerable to attacks.
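
To make the first point concrete, here is a minimal sketch of one common invalidation approach – combining a time-to-live (TTL) with an origin version tag. The class names, the `get`/`put` interface and the default TTL are illustrative assumptions, not a prescription for any particular edge platform.

```python
import time

class EdgeCacheEntry:
    """A cached object carrying an origin version tag and an expiry time."""
    def __init__(self, payload, version, ttl_seconds):
        self.payload = payload
        self.version = version
        self.expires_at = time.monotonic() + ttl_seconds

class EdgeCache:
    """Minimal TTL-plus-version invalidation for a single edge cache node."""
    def __init__(self):
        self._store = {}

    def put(self, key, payload, version, ttl_seconds=60):
        self._store[key] = EdgeCacheEntry(payload, version, ttl_seconds)

    def get(self, key, origin_version):
        entry = self._store.get(key)
        if entry is None:
            return None                       # miss: fetch from origin
        if time.monotonic() > entry.expires_at:
            del self._store[key]              # expired: invalidate
            return None
        if entry.version != origin_version:
            del self._store[key]              # stale: origin has a newer version
            return None
        return entry.payload                  # fresh hit
```

The TTL bounds how long a replica can drift; the version check catches mutations before the TTL runs out, at the cost of one version lookup against the origin.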

It is in the crosshairs of these questions that AI, machine learning (ML) and deep learning (DL) provide multiple pathways of salvation.

Let us see how.

AI algorithms, trained on historical user data, can forecast which content or data a user is likely to request next. In a 5G network, content popularity can change rapidly. Neural networks, trained on vast datasets of user behavior, can predict shifts in content popularity. For instance, during a significant global event, a particular news clip might see a surge in demand. Neural networks can forecast these spikes, ensuring that such content is cached in advance, catering to the increased demand.
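
As an illustration, here is a sketch of popularity forecasting framed as supervised learning: a small neural network predicts the next hour’s request count for a content item from a sliding window of past counts. The data is synthetic, and the window size, threshold and model shape are assumptions made purely for demonstration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical data: hourly request counts for one content item over a week.
rng = np.random.default_rng(0)
hourly_requests = rng.poisson(lam=20, size=24 * 7).astype(float)

# Frame forecasting as supervised learning: predict the next hour's demand
# from a sliding window of the previous 6 hours.
WINDOW = 6
X = np.array([hourly_requests[i:i + WINDOW]
              for i in range(len(hourly_requests) - WINDOW)])
y = hourly_requests[WINDOW:]

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, y)

# Pre-cache the item at the edge if forecast demand exceeds a threshold.
forecast = model.predict(hourly_requests[-WINDOW:].reshape(1, -1))[0]
if forecast > np.percentile(hourly_requests, 90):
    print(f"Forecast {forecast:.0f} req/h - pre-cache this item at the edge")
```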

Not all users have the same data needs. Unsupervised learning, notably clustering algorithms, can group users based on their data access patterns. For example, users in a particular location might frequently access specific types of content, such as local news or regional shows. A clustering model can identify these groups and ensure that relevant content is cached closer to them, enhancing their experience.
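
A minimal clustering sketch along these lines, assuming synthetic per-user access counts across a handful of content categories; the category set, cluster count and normalization choice are all illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

# Hypothetical data: per-user access counts across 5 content categories
# (e.g., local news, regional shows, sports, music, gaming).
rng = np.random.default_rng(1)
user_access = rng.integers(0, 50, size=(500, 5)).astype(float)

# Normalize rows so clusters reflect access *patterns*, not traffic volume.
profiles = normalize(user_access, norm="l1")

kmeans = KMeans(n_clusters=4, n_init=10, random_state=1).fit(profiles)

# Each centroid shows which categories dominate for that user group;
# content in the dominant category is a candidate for caching nearby.
for c, centroid in enumerate(kmeans.cluster_centers_):
    top_category = int(np.argmax(centroid))
    print(f"cluster {c}: favours category {top_category} -> cache it at nearby nodes")
```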

The dynamic nature of 5G RAN, with varying user densities and data demands, necessitates adaptive cache allocation. Reinforcement Learning (RL), where algorithms learn optimal strategies through interaction with the environment, can be employed. An RL agent, by continuously assessing user demands and cache hit rates, can adaptively allocate cache resources, ensuring that high-demand data is always readily available.
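
Here is a deliberately simplified tabular Q-learning sketch of that idea. The state (a discretized demand level), the action (a cache quota step) and the reward shape (hit rate minus a storage cost) are all assumptions; a production agent would observe live demand and hit-rate counters rather than this simulated environment.

```python
import numpy as np

# Tabular Q-learning: the agent picks a cache quota for a cell, observing
# a discretized demand level as the state.
N_DEMAND_LEVELS, N_QUOTAS = 5, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
rng = np.random.default_rng(2)
Q = np.zeros((N_DEMAND_LEVELS, N_QUOTAS))

def reward(demand_level, quota):
    # Hit rate rises with quota until demand is covered; storage has a cost.
    hit_rate = min(1.0, (quota + 1) / (demand_level + 1))
    return hit_rate - 0.1 * quota

state = rng.integers(N_DEMAND_LEVELS)
for step in range(20_000):
    # Epsilon-greedy action selection.
    if rng.random() < EPSILON:
        action = rng.integers(N_QUOTAS)
    else:
        action = int(np.argmax(Q[state]))
    r = reward(state, action)
    next_state = rng.integers(N_DEMAND_LEVELS)   # demand drifts randomly here
    Q[state, action] += ALPHA * (r + GAMMA * np.max(Q[next_state]) - Q[state, action])
    state = next_state

print("learned quota per demand level:", np.argmax(Q, axis=1))
```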

Let us look at the convolutional neural network (CNN). CNNs are famously inspired by the visual cortex of animals. Just like the cortex, CNNs excel at learning spatial hierarchies of features from input data, and they do so automatically, eliminating the need for manual feature engineering. The corollary is that CNNs are computationally intensive and require significant amounts of training data. Applied in parallel over spatial snapshots of the network, CNNs too can be used to pinpoint caching locations.
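
A small PyTorch sketch of the idea: a CNN scores each cell of a hypothetical two-channel spatial grid – user density and recent request intensity – as a candidate caching location. The grid size, channels and network shape are illustrative assumptions, and the weights below are untrained.

```python
import torch
import torch.nn as nn

# Hypothetical input: a 2-channel 16x16 grid over the coverage area, where
# channel 0 is user density and channel 1 is recent request intensity.
class CachePlacementCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(16, 1, kernel_size=1)   # per-cell caching score

    def forward(self, x):
        return self.head(self.features(x)).squeeze(1)

model = CachePlacementCNN()
demand_grid = torch.rand(1, 2, 16, 16)                # one snapshot of the area
scores = model(demand_grid)

# Pick the top-k cells as caching locations (in practice the network would
# first be trained against observed hit rates).
topk = torch.topk(scores.flatten(), k=5).indices
print("candidate caching cells:", topk.tolist())
```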

Cache storage is finite. Deciding which data to retain and which to replace is crucial. Traditional caching mechanisms, like Least Recently Used (LRU), might not always be optimal for dynamic 5G environments. ML can optimize cache replacement. By analyzing patterns in data access frequencies, user mobility, and network conditions, ML algorithms can determine the most relevant data to cache, ensuring optimal utilization of cache storage.
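
One way to realize learned replacement, sketched below: train a classifier to predict whether a cached item will be re-requested soon, and evict the item with the lowest predicted reuse probability rather than the least recently used one. The features, labels and data here are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: per-item features at decision time
# [hours since last access, accesses in last 24h, popularity trend]
# with a label: was the item re-requested within the next hour?
rng = np.random.default_rng(3)
X_train = rng.random((2000, 3)) * [48, 100, 2]
y_train = (X_train[:, 1] / 100 - X_train[:, 0] / 48
           + rng.normal(0, 0.2, 2000)) > 0

scorer = LogisticRegression().fit(X_train, y_train.astype(int))

def evict(cache_items):
    """Evict the item least likely to be re-requested, instead of plain LRU."""
    features = np.array([item["features"] for item in cache_items])
    p_reuse = scorer.predict_proba(features)[:, 1]
    return cache_items[int(np.argmin(p_reuse))]

cache = [{"key": f"clip-{i}", "features": rng.random(3) * [48, 100, 2]}
         for i in range(8)]
print("evict:", evict(cache)["key"])
```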

Do you have more ideas about AI in RAN caching? Do share them with us.


Published on: February 25, 2024

Kaustubha Parkhi
Principal Analyst, Insight Research