Re: PSK Fun from 2016
The very same thing happens in TV when we try to deliver a perfect picture to viewers. When the BER is 0.0 we are good. If it climbs, we have a problem somewhere along the transmission path, and it is time to go on the hunt for the culprit. There are a tremendous number of circuits that a DTV signal travels through between the source and the home flat screen. It isn't propagation that gets us, because everything is fiber fed, but any equipment that handles 1's and 0's will at times need a refresher course. Sometimes the fix is as simple as a 30-second power reset or a software upgrade, and occasionally it means replacement. In any case, it doesn't take long for viewers to take notice, and then the phone begins to ring.
Such is the digital world but it is still fun when you make a hobby out of it.
On Sunday, January 1, 2017 4:40 PM, "Stephen Melachrinos w3hf@... " <070@...> wrote:
Now you've opened another can of worms!!!
Technically, there already is a correlation for the most basic form of PSK31. The underlying modulation scheme is really DPSK--Differential Phase Shift Keying. The "differential" part effectively encodes the information bit based on whether the phase changes (differs) from bit to bit or stays the same. For example, if the phase stays the same from one bit to the next, then the transmitted (next) bit is a one. If the phase shifts 180 degrees at the bit transition, then the transmitted bit is a zero. This means that a single symbol error will actually cause two consecutive decoded bits to be in error (since the "reference" for the second bit is wrong). But this is typically "taken care of" in the analysis by adjusting the BER for the given signal-to-noise ratio, making it a bit worse than BPSK.
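To make that "one symbol error becomes two bit errors" effect concrete, here is a minimal sketch in Python. Phases are modeled as +1/-1, and the function names (dpsk_encode, dpsk_decode) are hypothetical, not from any actual PSK31 implementation:

```python
# Illustrative DPSK sketch: phase modeled as +1/-1.

def dpsk_encode(bits):
    """Differentially encode: keep the phase for a 1, flip it for a 0."""
    symbols = [1]  # arbitrary reference phase; the receiver never needs it
    for b in bits:
        symbols.append(symbols[-1] if b == 1 else -symbols[-1])
    return symbols

def dpsk_decode(symbols):
    """Decode by bit-by-bit comparison: same phase -> 1, reversal -> 0."""
    return [1 if symbols[i] == symbols[i - 1] else 0
            for i in range(1, len(symbols))]

bits = [1, 0, 1, 1, 0]
tx = dpsk_encode(bits)
tx[2] = -tx[2]            # corrupt a single symbol mid-stream
rx = dpsk_decode(tx)
errors = sum(a != b for a, b in zip(bits, rx))
print(errors)             # prints 2: one symbol error, two bit errors
```

Note that the decoder never references the starting phase, only adjacent symbols, which is exactly the self-calibrating property described below.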
(The reason for using DPSK instead of classic binary PSK is that it is self-calibrating. The receiver never needs to know the EXACT starting phase; it only needs to do a bit-by-bit comparison. This greatly simplifies the receiver design, and also makes the receiver much faster to synchronize after a dropout.)
It gets more complicated when error-correcting codes are used, like in QPSK31. With these codes, there's a complex interaction between "nearby" bits, where "nearby" differs depending on the code type and complexity. (I'll skip a more detailed discussion of block codes vs convolutional codes vs others.) But error-correcting codes are still used because the gain outweighs the loss of these extra errors, at least for classic AWGN channels. (That's Additive White Gaussian Noise, typical of a non-fading channel where background noise is simple static.) Some channels, though, are not AWGN. For those there are additional considerations. If the channel is subject to fading, you can use interleaving--rearranging the order in which bits are sent so that the fade only damages a few bits in any given decoder block. This works well only if the overall message is long compared to the fade duration, and it introduces a long delay in processing both the transmission and reception.
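The interleaving idea can be sketched with a simple block interleaver: write the coded bits row-wise into an R x C matrix and transmit them column-wise, so a fade that wipes out a run of consecutive transmitted bits lands on bits spaced C apart once deinterleaved. This is an illustration under assumed parameters (4 rows, 6 columns), not the interleaver of any particular mode:

```python
# Hedged sketch of a block interleaver (assumed 4x6 geometry for illustration).

def interleave(bits, rows, cols):
    """Write row-wise, read column-wise."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse: write column-wise, read row-wise."""
    assert len(bits) == rows * cols
    out = [None] * len(bits)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[i]
            i += 1
    return out

data = list(range(24))          # stand-ins for coded bits
tx = interleave(data, 4, 6)
assert deinterleave(tx, 4, 6) == data
# A fade hitting the first 4 transmitted bits damages original positions
# 0, 6, 12, 18 -- one per decoder block of 6, instead of 4 in one block.
print(tx[:4])                   # prints [0, 6, 12, 18]
```

The delay mentioned above shows up here too: the transmitter must buffer a full 24-bit matrix before sending, and the receiver a full matrix before decoding.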
The bottom line is that communications engineers try to address all of these other factors and reduce the analysis to a net BER. If you can do that, then my earlier discussion holds when you use the adjusted net BER.
Now, consider what happens if bit errors are correlated (that is, getting one bit wrong is correlated with an error on an adjacent bit). That might complicate things a bit. Pun utterly unintended. Well, ok, maybe just a bit.