Why must LTE Downlink BLER be around 10%?

Hello!

I’m wondering why the BLER at the eNodeB in the LTE Downlink must be around 10%.

Any explanation for that?

Why 10%, and not, for instance, 20%?

Thanks a lot.


Firstly, let's understand the concept of BLER.

It can be divided into two categories:

Initial BLER (IBLER): When the eNB sends data to the UE and the UE is unable to decode it, the UE sends a HARQ NACK back to the eNB. A NACK means the eNB has to retransmit the data, and this first-transmission failure is counted as IBLER, or Initial Block Error.

Residual BLER (RBLER): If the UE is unable to decode the data even after a retransmission, it sends another NACK and the eNB has to retransmit again. However, there is a limit to these retransmissions, and it is usually configurable. Commonly the limit is set to 4 retransmissions; after that, the eNB will not retransmit any further at HARQ level and counts the block as a Residual Block Error.
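To make the two counters concrete, here is a minimal sketch of how IBLER and RBLER could be tallied from HARQ feedback. The per-transmission failure probabilities are illustrative assumptions, not measured values; the retransmission limit of 4 is taken from the description above.

```python
import random

MAX_HARQ_RETX = 4  # max HARQ retransmissions after the first attempt (configurable, per the post)

def simulate_harq(num_blocks, p_fail_first=0.10, p_fail_retx=0.30):
    """Tally initial and residual block errors over many transport blocks.

    p_fail_first: assumed probability the first transmission is NACKed (the IBLER target)
    p_fail_retx:  assumed probability that each HARQ retransmission also fails
    """
    initial_errors = 0
    residual_errors = 0
    for _ in range(num_blocks):
        if random.random() < p_fail_first:
            initial_errors += 1                      # first transmission NACKed -> counts toward IBLER
            for _ in range(MAX_HARQ_RETX):
                if random.random() >= p_fail_retx:
                    break                            # a retransmission got through
            else:
                residual_errors += 1                 # all HARQ attempts failed -> counts toward RBLER
    return initial_errors / num_blocks, residual_errors / num_blocks

ibler, rbler = simulate_harq(100_000)
print(f"IBLER ~ {ibler:.1%}, RBLER ~ {rbler:.2%}")
```

With these toy numbers the RBLER comes out well below 0.5%, consistent with the figure quoted in the next paragraph.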

The BLER target is maintained on the IBLER, so the eNB tries to keep the IBLER at around 10% for each UE. RBLER is usually very low and is supposed to stay below 0.5%. The question may arise: why not reduce the IBLER further, since that should reduce retransmissions? The problem is that lowering the IBLER means lowering the MCS. Even a very low MCS will not ensure a linear decrease in IBLER, yet it degrades throughput excessively. So various simulations and field trials were carried out to arrive at an optimum IBLER target of 10%, which most vendors follow.
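The post does not say how the eNB holds that 10%, but one common mechanism in practice is outer-loop link adaptation: a correction offset applied to the reported CQI/SINR is stepped down on every NACK and up slightly on every ACK, with the step ratio chosen so the loop settles at the target. A minimal sketch, with illustrative step sizes:

```python
BLER_TARGET = 0.10   # the 10% IBLER target discussed above
STEP_DOWN = 0.5      # dB of back-off applied on a NACK (illustrative value)
# For the loop to settle at the target, ACK and NACK steps must balance on average:
# (1 - target) * STEP_UP == target * STEP_DOWN
STEP_UP = STEP_DOWN * BLER_TARGET / (1.0 - BLER_TARGET)

def update_olla_offset(offset_db, ack):
    """Nudge the SINR/CQI correction offset after each HARQ feedback."""
    return offset_db + STEP_UP if ack else offset_db - STEP_DOWN
```

The offset then biases the MCS choice up or down, so the observed IBLER drifts toward the target without the UE's reported CQI ever changing.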

Recently it has been found that a BLER target of 10% works fine in fair conditions, but when the radio conditions are either bad or good, other BLER targets provide higher gains. For instance, when radio conditions are bad, a 10% BLER target keeps the MCS very conservative; increasing the BLER target raises the MCS and yields higher throughput gains. So, where such parameters are available, they can be tuned for better results.

Original article: from @AliKhalid

3 Likes

So, by increasing the BLER target we can get a good throughput gain. What are the cons?
Will resources be wasted?

Be aware of your latency: read 3GPP TR 36.912, U-plane latency table B.2.2-2a, for a comparison of BLER 0% vs 10%.

At 0% BLER you will not have any latency at all; latency is zero, no?
And if you have no latency, you will get higher throughput.

For example, when you do a ping test with ICMP packets, you don't need high throughput, but you can measure how much latency there is. I don't think it is as simple as you said, i.e. low latency means high throughput.

Regardless, the throughput we can get depends on the CQI report, which is converted into an MCS value and then into a TBS size. Link adaptation uses the target BLER when selecting the MCS value, so we can tune the initial target BLER to push the MCS higher (accepting more BLER) and get more throughput, but there is an optimal value.
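As a rough illustration of that chain, here is a minimal sketch that maps a reported CQI plus an outer-loop correction (driven by the target BLER) to an MCS and a TBS. The lookup tables are placeholder formulas, not the real 3GPP TS 36.213 CQI/MCS/TBS tables.

```python
# Placeholder mappings; the real tables are defined in 3GPP TS 36.213.
CQI_TO_MCS = {cqi: min(28, 2 * cqi - 2) for cqi in range(1, 16)}    # illustrative
MCS_TO_TBS_BITS = {mcs: 256 * (mcs + 1) for mcs in range(29)}       # illustrative, for a fixed PRB allocation

def link_adaptation(reported_cqi, cqi_offset):
    """Map reported CQI plus the BLER-target-driven offset to an MCS and TBS.

    A higher target BLER lets the outer loop push cqi_offset up, which
    selects a higher MCS and therefore a larger TBS.
    """
    effective_cqi = max(1, min(15, reported_cqi + round(cqi_offset)))
    mcs = CQI_TO_MCS[effective_cqi]
    return mcs, MCS_TO_TBS_BITS[mcs]

print(link_adaptation(reported_cqi=9, cqi_offset=1.2))   # -> (18, 4864) with these toy tables
```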

Good discussion… I'm still working on this study… Hopefully I can get more views from the radio perspective.

1 Like

If the BLER target is set above 10%, the eNodeB tends to allocate a higher MCS to the UE; the TBS may be bigger, but it is harder for the UE to decode, so the retransmission rate is higher.
If the BLER target is below 10%, the eNodeB tends to allocate a lower MCS; the TBS may be smaller, but it is easier to decode, so the retransmission rate is lower.
From simulations, a 10% target BLER gives the best performance.

Is this BLER measured at the transport block level or at the code block level?

Hi dears,

Where can I find the BLER target value in the E// vendor parameters, and is this parameter set at the core level or the BTS level?