How is PRB utilization impacted by an inactivity timer change?

Hello Experts.

Can someone explain how PRB utilization is impacted by an inactivity timer change?


If you reduce the inactivity timer, it leads to a reduction in PRB utilization.

I was thinking that UEs which are released are already not consuming any data.

They are inactive; that's why they are released.

Yes, but they still consume a few resources to keep the link alive: timing advance commands (TAC), measurement reporting, etc.

Thanks! Can we quantify it somehow?
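One way to quantify it roughly: replay a UE's packet arrivals and, for each inactivity timer setting, count how long the UE stays RRC-connected (a proxy for the keep-alive overhead above) and how many RRC setups occur. A sketch, where the traffic pattern is entirely made up for illustration:

```python
import random

random.seed(0)

# Hypothetical packet arrival times (seconds) for one UE over an hour:
# short bursts separated by idle gaps, purely illustrative.
arrivals = []
t = 0.0
while t < 3600:
    for _ in range(random.randint(1, 5)):      # a burst of a few packets...
        t += random.uniform(0.05, 0.5)
        arrivals.append(t)
    t += random.uniform(1, 60)                 # ...then an idle gap of 1-60 s

def connected_stats(arrivals, timer):
    """Return (connected_seconds, rrc_setups) for a given inactivity timer.

    The UE is assumed connected from each packet until `timer` seconds
    after the last packet; a packet arriving after release triggers a
    fresh RRC setup.
    """
    connected = 0.0
    setups = 0
    release_at = None
    for t in arrivals:
        if release_at is None or t > release_at:
            setups += 1            # UE was released -> new RRC connection
            connected += timer     # stays connected `timer` s after this packet
        else:
            # still connected: extend the hold period out to t + timer
            connected += timer - (release_at - t)
        release_at = t + timer
    return connected, setups

for timer in (20, 10, 5):
    conn, setups = connected_stats(arrivals, timer)
    print(f"timer={timer:2d}s  connected={conn:6.0f}s  rrc_setups={setups}")
```

A shorter timer always cuts connected time (less keep-alive overhead on PRBs) but never decreases the setup count, which is exactly the trade-off discussed below.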

If the inactivity timer is reduced, then PRB utilisation will decrease.
However, it will increase the number of RRC connection requests.

An increase in RRC connections also increases PRB consumption, through the setup signalling.

Of course, RRC attempts will surely increase, and RRC SR% may improve as well, but what would be the best optimized value if PRB usage is high, above 90%?

In my network, the current value is 20 s. Should we reduce it to 10 s or 5 s?
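To weigh 20 s vs 10 s vs 5 s, a back-of-envelope PRB comparison can help. Every constant and counter below is an assumption for illustration; replace them with your own per-cell busy-hour statistics (connected-but-idle UE seconds and RRC setup counts):

```python
# All numbers below are illustrative assumptions, not measured values.
KEEPALIVE_PRB_PER_SEC = 2   # assumed PRBs/s spent per connected-but-idle UE
PRB_PER_RRC_SETUP = 8       # assumed PRBs spent on one RRC setup (signalling)

def prb_cost(connected_idle_seconds, rrc_setups):
    """Rough PRB cost of keeping UEs connected plus re-establishing them."""
    return (connected_idle_seconds * KEEPALIVE_PRB_PER_SEC
            + rrc_setups * PRB_PER_RRC_SETUP)

# Hypothetical busy-hour counters for one cell under each timer setting
scenarios = {
    "timer=20s": dict(connected_idle_seconds=40_000, rrc_setups=3_000),
    "timer=10s": dict(connected_idle_seconds=24_000, rrc_setups=4_500),
    "timer=5s":  dict(connected_idle_seconds=14_000, rrc_setups=6_500),
}
for name, counters in scenarios.items():
    print(name, prb_cost(**counters), "PRBs/hour")
```

With these made-up inputs the saving flattens out between 10 s and 5 s, which is typical: below some point the extra setup signalling eats most of what the shorter timer saves.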

You'd better split traffic between layers.

Most networks have an inactivity timer of 10 seconds, but I have also seen networks with 3-5 seconds.

RRC connection requests increase dramatically. One time I reduced it from 5 s to 2 s (we had too much RRC user congestion with no possible license expansion).

DRX was not active back then, and some users complained of battery drain (not confirmed, though).

Every action has pros and cons.

To keep PRB utilisation lower, it is better to split traffic between cells and layers via handover margins, reselection parameters, CIOs, etc.

I find it weird when people say there is congestion on a cell while, at the same time, the other 3-4 layers of the same sector carry low traffic.

It is all about sharing traffic between layers in a proper way.


Yes, and also verify the antenna RET, which is very important and should be properly set to the correct value.

For example, a high band like L1800 should have 2 to 3 degrees more downtilt than the low band L800.


RF tuning should be ensured, definitely.

Speaking of RET, I recently faced a case of improper configuration: crossed and swapped feeders.

For example, when I downtilted the 2100 band, the results were observed on 1800.

Is this a purely RF issue? Can't it be detected somehow in a better way?

A swap will likely be identified by drive test results, such as PCI plots. Yes, a traffic shift is another indicator as well, but only with an aggressive action.

That means 2100 carries less traffic, and so 1800 will pick up the delta of the traffic.

That's what we have been doing.
That is crazy, especially in urban scenarios.


It depends on the change, from value to value.
If you don't downtilt aggressively, it will surely improve indoor coverage/quality in a dense area.
A field test is the best way to measure the right value for each band.

RRC user congestion is counted on connected users, not on attempts, right?

Yes, exactly. My point is that RRC requests increased rapidly following the timer reduction,
which makes sense in terms of signaling.

If the high-frequency layer is more downtilted than the low-frequency layer, there will be a coverage footprint imbalance.

The high-frequency layer should have a lower RET value than the low-frequency layer, since higher frequencies have a smaller coverage footprint compared to lower frequencies.


  • 2300 MHz: RET 6°
  • 1800 MHz: RET 8°
  • 850 MHz: RET 10°
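Ignoring propagation, simple geometry shows what those RET values do: a smaller downtilt pushes the main-beam centre farther out, which partly compensates the weaker propagation of the higher band. A sketch with an assumed 30 m antenna height (flat terrain, electrical tilt only):

```python
import math

def mainbeam_distance(height_m, downtilt_deg):
    """Ground distance where the main-beam centre hits, flat-terrain geometry."""
    return height_m / math.tan(math.radians(downtilt_deg))

# Assumed 30 m antenna height; RET values from the example above
for band, tilt in [("2300", 6), ("1800", 8), ("850", 10)]:
    d = mainbeam_distance(30, tilt)
    print(f"{band} MHz, tilt {tilt} deg: beam centre at ~{d:.0f} m")
```

So the least-tilted 2300 layer aims its beam the farthest, trading off against its shorter propagation reach; the actual footprint match has to be verified by field test, as said above.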