There is no Theoretical Limit for Noise Reduction in Digital Communications

Shannon entropy is a basic characteristic of communications from an energetic point of view. Entropy is expressed here as a function of the signal-to-noise ratio, and the lower bound of entropy is investigated. We prove that a finite nonzero bound does not exist; therefore, in the case of M-QAM modulation, there is no theoretical limit for the reduction of the effect of noise. In our investigations, averaging is considered, exploiting the zero expected value of the Gaussian noise.


Introduction
In this year, we celebrate the 73rd anniversary of the Shannon theory (Shannon, 1948). An essential tool of the theory is the Shannon entropy (Shannon, 1949):

H = −∑_i p_i ld p_i (1)

where p_i is the probability of the successful communication of the i-th message and ld is the logarithm of base 2. For binary messages,

H = −p ld p − (1 − p) ld (1 − p) (2)

From thermodynamics, entropy is a measure of disorder: at complete order, entropy is zero; at complete disorder, entropy has a maximum. Suppose that this idea can be extended to the theory of communications as well. That means that when we try to decrease the effect of noise, we try to decrease entropy. At the minimum effect of noise, entropy reaches a lower bound. So, our goal is to find the lower bound of entropy.
In Figure 1 we can see (2). The curve of the entropy is almost a quadratic function. Complete order is at p = 0 and p = 1; complete disorder is at p = 0.5. Indeed, the success of communication is detected with certainty at p = 0 and p = 1, and the uncertainty is highest at p = 0.5. Figure 1 has been produced using the following Matlab program:

p=0:0.001:1;
plot(p,-p.*log2(p)-(1-p).*log2(1-p))
hold on
plot(p,1-4.*(p-.5).^2,'r')

In Section II, H is expressed as a function of the signal-to-noise ratio. In Section III, it is proved that H has no extremum as a function of the detection threshold. In Section IV, it is proved, based on the previous Section, that there is no theoretical limit for the reduction of the effect of noise. Throughout our investigations, averaging is considered.
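For readers without Matlab, a minimal Python equivalent of the snippet above (the function names are ours; the curves are exactly those of Figure 1, the binary entropy (2) and the quadratic 1 − 4(p − 0.5)²):

```python
import math

def binary_entropy(p: float) -> float:
    """Binary Shannon entropy (2): H = -p ld p - (1-p) ld (1-p), in bits."""
    if p in (0.0, 1.0):
        return 0.0  # complete order: zero entropy, by the limit x ld x -> 0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def quadratic_approx(p: float) -> float:
    """The quadratic curve 1 - 4(p - 0.5)^2 plotted alongside H(p) in Figure 1."""
    return 1.0 - 4.0 * (p - 0.5) ** 2

# The two curves coincide at p = 0, 0.5 and 1; H has its maximum of 1 bit at p = 0.5.
```

Plotting both functions over p = 0:0.001:1 reproduces Figure 1.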

Entropy

i) Thermodynamical entropy (Lieb & Yngvason, 1999)

The concept of entropy was introduced by Clausius (Clausius, 1879). The goal was to distinguish natural processes that are impossible or irreversible, even though the law of energy conservation is not violated. The second law of thermodynamics says that in an isolated system, total entropy cannot decrease. Thus, entropy is a measure of disorder.
Clausius's approach is macroscopic; it was complemented by Boltzmann's microscopic approach (Laranjeiras, Lucena, & Chiappin, 2020). The Boltzmann entropy formula says that entropy is proportional to the logarithm of the number of microscopic states of a system, i.e., S = k ln W, where k is the Boltzmann constant and W is the number of states. This approach is indispensable in determining the energy distribution of different elementary particles. One of them, the Fermi-Dirac statistics (Dirac, 1926), is part of the basis of semiconductor physics.

ii) Gibbs entropy (Jaynes, 1965)

The Gibbs entropy formula differs from that of Boltzmann: S = −k ∑_i p_i ln p_i, where k is the Boltzmann constant and the p_i are the probabilities of the system's fluctuations. It has been shown that the Boltzmann formula involves neglections that do not hold in general, and the correct formula is Gibbs's one.

iii) von Neumann entropy (Mackey, 2013)

The von Neumann entropy is the extension of Gibbs entropy to quantum mechanics: S = −Tr(ρ ln ρ), where ρ is the density matrix (Wikipedia: Density matrix), ln is the natural logarithm (of base e), and Tr() is the trace (the sum of the diagonal elements).

iv) Shannon entropy (Shannon & Weaver, 1949)

Shannon entropy is essentially the extension of Gibbs entropy from particles to messages in information transfer: H = −∑_i p_i ld p_i, where ld is the logarithm of base 2 and p_i is the probability of the successful transfer of the i-th message.
It is an interesting question whether entropy is connected with the amount of transferred information. If the self-information of the i-th message (Wikipedia: Information content) is defined as I_i = −ld p_i, then entropy is the expected self-information, H = ∑_i p_i I_i. Now, if the probabilities are identical, we return to the Boltzmann entropy formula: if p_i = 1/W for all i, then H = ld W. So, the Boltzmann entropy of a series of messages equals the expected self-information.
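The identity between entropy and expected self-information, and its reduction to ld W for identical probabilities, can be checked numerically (a sketch with function names of our own choosing):

```python
import math

def shannon_entropy(probs):
    """H = -sum_i p_i ld p_i, in bits (terms with p_i = 0 contribute 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

def expected_self_information(probs):
    """Expected value of the self-information I_i = -ld p_i."""
    return sum(p * (-math.log2(p)) for p in probs if p > 0.0)

# For identical probabilities p_i = 1/W, the entropy reduces to ld W,
# the (rescaled) Boltzmann formula.
W = 8
uniform = [1.0 / W] * W
```

For any distribution the two functions return the same value; for the uniform case above, both give ld 8 = 3 bits.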
An expression for Shannon entropy

From (2) we have the following idea. i) The probability of a successful message is in close connection with the bit error rate (BER). ii) BER can be expressed as a function of the signal-to-noise ratio (SNR) for any specific type of digital modulation (Proakis, 2001). iii) Then entropy can be expressed as a function of the detection threshold L, see (3) below. iv) Then the extremum of entropy can be investigated. This idea has been motivated by the question whether successful communication of bits is really limited by the condition that the power density of the signal equals that of the noise. We suspect that the answer is negative; therefore we introduce the detection threshold 0 ≤ L ≤ 1 and say that successful communication is limited by:

PDS = L · PDN (3)

where PDS and PDN are the signal and noise power densities, respectively. That means that if PDS > L · PDN, then the message is detected successfully, otherwise not. Our plan is to express H(L) and to try to determine the value of L corresponding to the minimum of H at a given SNR.
i) Denote by g and b the numbers of good and bad bit detections, respectively. Then

p = g / (g + b) (4)

BER = b / (g + b) (5)

By combining (4) and (5),

p = 1 − BER (6)

From (2) and (6),

H = −(1 − BER) ld (1 − BER) − BER ld BER (7)

ii) An expression for BER as a function of SNR is found in (Proakis, 2001):

BER = Q(√(2 E_b / N_0)) (8)

where E_b is the bit energy, N_0 is the noise power density, and 4QAM modulation was assumed. Bit energy is the same as signal power density (Appendix III), so (8) can be written as follows:

BER = Q(√(2 PDS / PDN)) (9)

where PDS and PDN are the signal and noise power densities, respectively. In this paper, we define the signal-to-noise ratio SNR as

SNR = PDS / PDN (10)

This definition, instead of SNR = S / N, where S and N are the signal and noise powers, respectively, is based on our modification of the Shannon formula (Ladvánszky, 2020).
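A numerical sketch of (7) and (9), using the standard identity Q(x) = ½ erfc(x/√2) (the function names are our own):

```python
import math

def qfunc(x: float) -> float:
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_4qam(snr: float) -> float:
    """(9): BER = Q(sqrt(2 * SNR)) for 4QAM, with SNR = PDS / PDN."""
    return qfunc(math.sqrt(2.0 * snr))

def entropy_from_ber(ber: float) -> float:
    """(7): H = -(1 - BER) ld (1 - BER) - BER ld BER."""
    if ber in (0.0, 1.0):
        return 0.0
    return -(1.0 - ber) * math.log2(1.0 - ber) - ber * math.log2(ber)
```

At SNR = 1 (0 dB) this gives BER = Q(√2) ≈ 0.0786; as SNR grows, BER and hence H shrink.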
Assume that averaging over n repetitions is applied to our signal; this decreases PDN and leaves PDS intact:

PDN → PDN / n (11)

Thus, from (9), BER = Q(√(2 · SNR)) is rewritten as

BER = Q(√(2 n · SNR)) (12)

iii) Applying (7) and (12), H has been obtained as a function of the detection threshold L, for a given SNR.
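A Monte Carlo sketch of the averaging step (11): repeating each bit in Gaussian noise and taking the arithmetic mean of the received samples divides the effective noise variance by the number of repetitions. The simulation parameters below are our own illustrative choices:

```python
import random

def simulate_ber(n_avg: int, noise_sigma: float,
                 n_bits: int = 20000, seed: int = 1) -> float:
    """Empirical BER of antipodal (+1/-1) bits detected from the mean of
    n_avg noisy repetitions; averaging exploits the zero mean of the noise."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        # Arithmetic mean of n_avg independent noisy copies of the same bit.
        received = sum(bit + rng.gauss(0.0, noise_sigma) for _ in range(n_avg)) / n_avg
        if (received >= 0.0) != (bit > 0.0):
            errors += 1
    return errors / n_bits

# More repetitions -> lower BER; nothing in the averaging itself imposes a floor.
```

With noise_sigma = 1, the single-shot BER is near Q(1) ≈ 0.159, while 16-fold averaging pushes it toward Q(4) ≈ 3·10⁻⁵.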

iv) Now we investigate whether H(L) has a finite nonzero minimum.
H(L) has no extremum: In Figure 2, we plotted the logarithmic derivative of H(L). The computation has been repeated four times, with different parameter values.
The meaning of Figure 2 is the following.
L can be arbitrarily small. That is, the effect of noise can be decreased arbitrarily by averaging, i.e., by repeating the message in noise and taking the arithmetic mean of the detected signals. This can be done without any bound.
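The claim can be checked numerically under our reconstruction of (7) and (12): as the number of averaged repetitions n grows, the entropy decreases toward zero without hitting a nonzero floor. The SNR value 0.5 below is an arbitrary illustration:

```python
import math

def qfunc(x: float) -> float:
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def entropy_after_averaging(n: int, snr: float) -> float:
    """Binary entropy (7) evaluated at BER = Q(sqrt(2 n SNR)) from (12)."""
    ber = qfunc(math.sqrt(2.0 * n * snr))
    if ber == 0.0:
        return 0.0
    return -(1.0 - ber) * math.log2(1.0 - ber) - ber * math.log2(ber)

# Entropy falls monotonically toward 0 as n grows: no finite nonzero lower bound.
values = [entropy_after_averaging(n, snr=0.5) for n in (1, 2, 4, 8, 16)]
```

Each additional doubling of n strictly lowers the entropy, consistent with the absence of a finite nonzero minimum.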
Analytical proofs are given in Appendices I and II.

Conclusions
We expected that by finding the extremum of H(L), a specific value of L could be found that bounds the averaging procedure. Surprisingly, we found that L can be arbitrarily small.
The result is significant. When trying to decrease the effect of noise by averaging, the repetition can be done arbitrarily many times, without limit. This is reasonable, because increasing the number of averaged repetitions reduces the effect of noise further. Thus, Equation (3) says that there is no theoretical limit for the reduction of the effect of noise.
This result is in harmony with a similar result from the theory of coding: BER can be arbitrarily reduced by applying sufficient redundancy.
Note that we speak about the lack of a theoretical limit. Of course, the practical limit is more rigorous. Otherwise, the signal-to-noise ratio could be made arbitrarily high, and there would be no upper bound for the speed of information transfer over a noisy channel of finite bandwidth.
Although our calculations are for 4QAM, the same can be done for any kind of digital modulation. That is the explanation of the title.
Finally, we should emphasize that the term noise reduction does not mean here physical reduction of noise, but only that of its effect on communications.
One of the reviewers asked for more comparison with existing results. The author is not aware of any such study (Shannon, 1948; 1949; Proakis, 2001; Littlejohn & Foss, 2009). The nearest idea is, as we have already mentioned, the proof of the existence of an error-correcting code whenever the BER is below the theoretical upper limit of 0.5 (Shannon, 1948; 1949). Here we pointed out that, theoretically, we can achieve the same without coding. Thus, averaging and coding may yield the same result, as both exploit redundancy.

Appendixes
Appendix I. Analytical proof of the strict monotonicity of H(L) for 4QAM

Strict positivity of dH/dL should be proved. We achieve it by showing that dH/dL > 0 for all values of L. Therefore H(L) has no extremum; it is strictly monotonically increasing. That is what we wanted to prove.
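A numerical companion to the analytical proof (our own sketch): the binary entropy (7) is strictly monotonic in BER on (0, 0.5), so composing it with a monotone BER(L) cannot produce an interior extremum:

```python
import math

def h2(ber: float) -> float:
    """Binary entropy (7) in bits, for 0 < ber <= 0.5."""
    return -(1.0 - ber) * math.log2(1.0 - ber) - ber * math.log2(ber)

# Finite-difference check of strict monotonicity of (7) on (0, 0.5):
grid = [0.001 * k for k in range(1, 500)]
diffs = [h2(b) - h2(a) for a, b in zip(grid, grid[1:])]
```

Every finite difference on the grid is strictly positive, so (7) has no interior extremum on the interval.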
Appendix II. Extension to QAM of arbitrary order

(A.1.6) is an approximation. It should be replaced with the exact formula that is valid for QAM of arbitrary order (Meghdadi, 2008; Wireless Pi, 2019).
Let us start with the exact formula for the M-ary QAM symbol error rate (Meghdadi, 2005; Equation 15):

P_M = 1 − (1 − P_{√M})² 

where P_{√M} is the symbol error rate of the underlying √M-ary PAM constellation.

Appendix III. Bit energy and signal power density

One of the reviewers asked for a reference for the statement that bit energy is the same as signal power density. As we are not aware of a reference for this statement, we explain it in this Appendix.
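A sketch of the exact square M-QAM symbol error rate in Python. The inner PAM term P_{√M} = 2(1 − 1/√M) Q(√(3·SNR/(M − 1))) follows the standard textbook decomposition and uses the per-symbol Es/N0 convention, which may differ from this paper's density-based SNR:

```python
import math

def qfunc(x: float) -> float:
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ser_mqam(M: int, snr: float) -> float:
    """Exact symbol error rate of square M-QAM: P_M = 1 - (1 - P_pam)^2,
    where P_pam is the error rate of the underlying sqrt(M)-ary PAM."""
    p_pam = 2.0 * (1.0 - 1.0 / math.sqrt(M)) * qfunc(math.sqrt(3.0 * snr / (M - 1.0)))
    return 1.0 - (1.0 - p_pam) ** 2

# For M = 4 this reduces to the familiar QPSK result 2Q(sqrt(SNR)) - Q(sqrt(SNR))^2.
```

At equal SNR, higher-order constellations give a higher symbol error rate, as expected.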
The sketch of the calculation is that we determine the bit energy E_b and the average signal power density PDS of a periodic, rectangular signal of amplitude A and bitrate R, which means a repetition frequency of R/2. Then we find E_b = α · PDS, where α is a positive, real number. In the calculation, we assume an ideal rectangular lowpass filter following the mixer, with corner frequency R.
The exact value of α does not influence the conclusion of this paper; thus, without loss of generality, α = 1 is assumed.
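Under our own simplifying assumptions (a rectangular antipodal signal of amplitude A, bit duration 1/R, and power spread uniformly over an ideal lowpass bandwidth R), the equality of bit energy and signal power density is simple arithmetic:

```python
def bit_energy(A: float, R: float) -> float:
    """E_b = A^2 * T_bit, with bit duration T_bit = 1/R."""
    return A ** 2 / R

def signal_power_density(A: float, R: float) -> float:
    """PDS = (average power A^2) / (lowpass bandwidth R)."""
    return A ** 2 / R

# Under these assumptions E_b = PDS exactly, i.e. the factor alpha equals 1.
```

Any other filter shape only rescales α, which, as stated above, does not affect the conclusion.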

Copyrights
Copyright for this article is retained by the author(s), with first publication rights granted to the journal.
This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).