Probability of error versus Probability of bit error

I don't think you can even decode a bit with SINR = -20 dB.

I agree with your point that the 0.5 limit does not apply everywhere; that's why I mentioned it holds for a BSC and only if the receiver knows all about the nature of the channel. So it's more of a theoretical concept.
But if we know the BER is, let's say, 0.7, then we can definitely say that not all bits are in error.
That's the meaning of probability.
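To make that concrete, here is a minimal sketch (Python/NumPy; the variable names are just illustrative, not from any standard) of a BSC whose crossover probability is 0.7. A receiver that knows the channel can simply invert its hard decisions, so the effective error rate drops to 0.3; that is the sense in which the BER is capped at 0.5 in this theoretical case.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.7                      # BSC crossover probability (assumed known at the receiver)
n = 100_000                  # number of transmitted bits

tx = rng.integers(0, 2, n)              # random binary source
rx = tx ^ (rng.random(n) < p)           # BSC: each bit is flipped with probability p

raw_ber = np.mean(rx != tx)             # comes out close to 0.7

# If the receiver knows p > 0.5, it can invert its hard decisions,
# turning the effective error probability into 1 - p < 0.5.
decided = rx ^ int(p > 0.5)
effective_ber = np.mean(decided != tx)  # comes out close to 0.3

print(raw_ber, effective_ber)
```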

So how come BER only goes up to 0.5?
Have you never seen a BER of 0.8?

Only in this scenario.
But we can't guarantee that all these assumptions hold in practical cellular communication systems.

There's too much theory here.
We should stay close to real-life situations.
And in real life, BER goes from 0 to 1, not from 0 to 0.5.

But how can one understand real-life situations without understanding the theory? :slightly_smiling_face:

When people decide on an SoC, they even consider a BER of 0.01.
Whereas radio planners are OK with even 5~10% BER.

I don't know what theory you studied where the maximum BER value is 0.5.
If I transmit 10 bits and all 10 bits are received in error, how can you claim that BER is at most 0.5?

I haven't said the maximum BER value is 0.5 for practical systems; it applies only to the case I mentioned, where the receiver can be designed to know everything about the channel and binary bits are transmitted.

Your case does not cover all real-life situations.
So make a theory that covers all real-life situations.

The a priori probability of bit error should also be mentioned here.

I addressed the doubt that was asked.

I haven't made these theories :slightly_smiling_face:. Also, one theory cannot cover all real-life problems.

We both stated our opinions about the BER range. :+1:

Indeed, good discussion. :wink:
Thanks :+1:

I agree with @RFSpecialist regarding BSC.
I am using the Belief Propagation algorithm for LDPC decoding, and this algorithm uses the BSC case.
So, in the process, it computes the probability of bit error.
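For reference, the channel-LLR initialization of belief propagation over a BSC typically looks like the sketch below (Python, assuming the crossover probability p is known; `bsc_channel_llrs` is just a hypothetical helper name, and the BP iterations themselves are omitted).

```python
import numpy as np

def bsc_channel_llrs(rx_bits: np.ndarray, p: float) -> np.ndarray:
    """Initial channel LLRs for belief propagation when the channel is a BSC.

    Convention: LLR = log(P(bit=0) / P(bit=1)), so a received 0 gives
    +log((1-p)/p) and a received 1 gives the negated value.
    """
    llr_mag = np.log((1.0 - p) / p)
    return llr_mag * (1 - 2 * rx_bits)

# Example: p = 0.05 gives |LLR| ≈ 2.94, i.e. fairly confident channel values.
print(bsc_channel_llrs(np.array([0, 1, 1, 0]), 0.05))
```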