[4G] UE PUSCH SNR affected by #PRBs

Hello everyone,

I wanted to raise a question after digging into the code of srsRAN (an open-source implementation of the eNB/gNB and a 4G/5G UE). I will focus on the 4G part.

I connected the srsENB process to an srsUE through a channel simulator (built in GNU Radio Companion). The channel in between is a simple AWGN channel whose noise voltage I can modify.

I also modified the default srsENB scheduler so that I can control the number of PRBs assigned to the user for the PUSCH.

My expectation was that, for a fixed noise voltage level, the reported PUSCH SNR would stay the same regardless of the allocation.

However, I noticed that the estimated PUSCH SNR (computed during PUSCH channel estimation, before PUSCH decoding) is affected by the number of PRBs assigned to the user. In particular, the srsUE applies a normalization while encoding the PUSCH signal that depends on the number of PRBs.
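To make the effect concrete, here is a small standalone sketch (the function names and numbers are illustrative, not the actual srsRAN code): if a UE keeps its total transmit power fixed, it has to scale the per-subcarrier amplitude down as the allocation grows, so the measured SNR per resource element drops even though the channel noise is unchanged.

```python
import numpy as np

def tx_symbols(n_prb, total_power=1.0, n_sc_per_prb=12):
    """Hypothetical sketch: spread a fixed total TX power over the allocation."""
    n_sc = n_prb * n_sc_per_prb
    # Random QPSK symbols with unit power per resource element
    bits = np.random.randint(0, 2, (2, n_sc))
    syms = ((1 - 2 * bits[0]) + 1j * (1 - 2 * bits[1])) / np.sqrt(2)
    # Normalize so the TOTAL transmit power stays constant regardless of n_prb
    return syms * np.sqrt(total_power / n_sc)

def rx_snr_db(n_prb, noise_var=1e-3):
    """SNR per resource element seen over an AWGN channel of fixed variance."""
    s = tx_symbols(n_prb)
    signal_power = np.mean(np.abs(s) ** 2)  # shrinks as n_prb grows
    return 10 * np.log10(signal_power / noise_var)

# Doubling the allocation halves the per-RE power: roughly a 3 dB SNR drop
print(rx_snr_db(6), rx_snr_db(12), rx_snr_db(24))
```

Under this (assumed) constant-total-power model, every doubling of the PRB allocation costs about 3 dB of per-resource-element SNR, which would match the behavior you observed.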

I would like to ask whether this is normal behavior for a 4G UE. And if it is, shouldn't the channel estimation on the eNB side still be able to identify the real channel conditions under which the UE transmits?

Good question. Following to see how other experts answer this.
In my opinion there are two factors:

1.) SNR is the ratio of received signal power to noise power. If the UE's total transmit power is fixed, the received power per RB drops as the allocated bandwidth increases.
2.) The noise level in the electronics is usually modelled as N = kTB, where k is the Boltzmann constant, T is the temperature, and B is the bandwidth. This shows that the noise power at the circuit level increases with bandwidth, which should degrade the SNR as well.
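The second factor is easy to put numbers on. A quick sketch of the kTB noise floor (the 180 kHz-per-PRB figure is the standard LTE PRB width; T = 290 K is the usual reference temperature):

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_dbm(bandwidth_hz, temp_k=290.0):
    """Thermal noise floor N = kTB, expressed in dBm."""
    noise_w = K_BOLTZMANN * temp_k * bandwidth_hz
    return 10 * math.log10(noise_w * 1000)  # W -> mW before taking dB

# One LTE PRB spans 180 kHz; noise floor rises 3 dB per bandwidth doubling
for n_prb in (6, 12, 25, 50):
    bw = n_prb * 180e3
    print(f"{n_prb:3d} PRB -> noise floor {thermal_noise_dbm(bw):6.1f} dBm")
```

This reproduces the familiar -174 dBm/Hz figure at 290 K, so a 50-PRB allocation sees roughly 9 dB more integrated thermal noise than a 6-PRB one.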