I wanted to raise a question after digging into the code of srsRAN (which provides an open-source implementation of the eNB/gNB and a 4G/5G UE). I will focus on the 4G part.
I connected the srsENB process to an srsUE through a channel simulator (built in GNU Radio Companion). The channel in between is a simple AWGN channel, where I am able to modify the noise amplitude (voltage).
I also modified the default srsENB scheduler so that I can control the number of PRBs assigned to the user for the PUSCH.
I was expecting that, for a fixed noise voltage level, the reported PUSCH SNR would stay the same regardless of the allocation.
However, I noticed that the estimated PUSCH SNR (computed during PUSCH channel estimation, before PUSCH decoding) is affected by the number of PRBs assigned to the user. Specifically, the srsUE performs some type of normalization while encoding the PUSCH signal, and that normalization depends on the number of PRBs.
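Just to illustrate the effect I mean (a toy numerical sketch, not srsRAN's actual code; the function name and parameter values are made up): if the UE normalizes so that the total transmit power stays constant across allocations, then the power per resource element, and hence the per-RE SNR seen at the eNB, drops as the allocation grows, even though the noise level is unchanged.

```python
import math

def per_re_snr_db(total_tx_power=1.0, noise_power_per_re=0.01, n_prb=6):
    """Per-resource-element SNR when the UE keeps its total transmit
    power fixed and spreads it evenly over the allocated PRBs
    (12 subcarriers per PRB). All values are hypothetical."""
    signal_power_per_re = total_tx_power / (n_prb * 12)
    return 10 * math.log10(signal_power_per_re / noise_power_per_re)

# Same noise level, different allocations: the estimated SNR shifts
# by 10*log10(n_prb ratio) purely because of the normalization.
for n_prb in (6, 25, 50):
    print(f"{n_prb:3d} PRBs -> {per_re_snr_db(n_prb=n_prb):6.2f} dB")
```

Under this assumption, going from 6 to 50 PRBs lowers the measured per-RE SNR by 10*log10(50/6) ≈ 9.2 dB with the noise untouched, which matches the kind of PRB-dependent shift I am observing.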
I would like to ask whether this is normal behaviour for a 4G UE. And if it is, shouldn't the channel estimation on the eNB side still be able to identify the real channel conditions under which the UE transmits?