#1 - Noise Prediction
Posted March 21st 10, 09:49 PM to rec.radio.amateur.antenna

Hi,

All the communication equations and formulae I know of today (e.g. the
Shannon-Hartley theorem) give limits on data transmission given
certain signal and noise power levels.

Most models assume that the data received is the sum of the original
signal and Gaussian noise. More advanced models assume a transfer
function is applied to the signal to simulate multipath, and other
radio phenomena.

My question is this: since in many cases at least part of the noise is
not entirely unpredictable, it seems that if it could be predicted, it
could be subtracted from the received signal. It would then no longer
count as noise as far as the Shannon-Hartley theorem goes, allowing a
higher channel throughput when all other conditions are the same.

Examples of "predictable" interference would be EMI from other
man-made devices, such as oscillators in power supplies.
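As a back-of-the-envelope check, the Shannon-Hartley limit is C = B*log2(1 + S/N), so subtracting the predictable part of the noise directly raises the achievable capacity. A quick numeric sketch (the bandwidth and power levels below are made-up illustrative values, and the clean 50/50 split between predictable and random noise is an assumption):

```python
import math

def capacity_bps(bandwidth_hz, signal_w, noise_w):
    # Shannon-Hartley: C = B * log2(1 + S/N)
    return bandwidth_hz * math.log2(1.0 + signal_w / noise_w)

B = 3000.0       # hypothetical 3 kHz voice-bandwidth channel
S = 1e-6         # 1 uW received signal power (made up)
N_total = 1e-7   # total noise power, assumed half predictable
N_random = 5e-8  # the truly random (Gaussian) remainder

before = capacity_bps(B, S, N_total)
after = capacity_bps(B, S, N_random)  # predictable half cancelled
print(f"capacity before cancellation: {before:.0f} bit/s")
print(f"capacity after cancellation:  {after:.0f} bit/s")
```

Halving the effective noise power here buys roughly 27% more capacity; the gain shrinks as S/N grows, because of the logarithm.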

My idea for doing this would be to receive a given signal (assumed
digital), demodulate it and apply error correction to obtain the
original data. Next, re-encode and modulate the data just like the
transmitter did. At this point, the receiver has a precise copy of
the data transmitted. Next apply a transfer function which simulates
the channel (this part would have to be self-tuning to minimise
error). Now the receiver has a copy of the data as it would have been
received if there were no external noise sources (but including the
effects of signal reflection and fading, which would be captured in
the transfer function).
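The "self-tuning" step described above is what adaptive-filter texts call decision-directed adaptation. A minimal sketch using an LMS update (the 3-tap channel, BPSK symbols, step size, and noise level are all assumed for illustration; numpy stands in for MATLAB):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical BPSK symbols the receiver has already decoded correctly.
decided = rng.choice([-1.0, 1.0], size=2000)

# Unknown 3-tap multipath channel the receiver must learn (assumed).
true_channel = np.array([1.0, 0.4, -0.2])
received = np.convolve(decided, true_channel)[:len(decided)]
received += 0.01 * rng.standard_normal(len(decided))  # small random noise

# Decision-directed LMS: the model is driven by the decided symbols,
# and the residual error nudges the channel estimate toward the truth.
taps = np.zeros(3)
mu = 0.01  # step size (assumed)
for n in range(3, len(decided)):
    x = decided[n:n - 3:-1]        # last 3 decided symbols, newest first
    err = received[n] - taps @ x   # received minus modelled sample
    taps += mu * err * x           # LMS update

print("estimated channel taps:", np.round(taps, 2))
```

Once the taps track the channel, convolving the re-encoded symbols with them yields the "ideal" received waveform the post describes.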

Next, the receiver could subtract the "ideal" received data from the
actual received data, obtaining the noise received. Of this noise,
some is predictable, and some is truly random (assume true Gaussian).
This data could then be Fourier transformed, time-shifted, and inverse
Fourier transformed to obtain a prediction of the noise, which could
then be subtracted from the incoming signal for the next piece of
received data.
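For an interferer that is (nearly) periodic, the Fourier-transform-and-time-shift step amounts to multiplying each spectral bin by a linear phase ramp. A sketch under the assumption of a single stable power-supply-like tone (the frame length, bin number, and phase are made up):

```python
import numpy as np

N = 1024                 # frame length (assumed)
n = np.arange(N)
k0, phase = 50.0, 0.3    # made-up interferer: a stable tone at bin 50
observed = np.cos(2 * np.pi * k0 * n / N + phase)

d = 17                   # predict d samples into the future
X = np.fft.fft(observed)
k = np.fft.fftfreq(N, d=1.0 / N)   # signed bin frequencies
# A time advance of d samples is a phase ramp of exp(+j*2*pi*k*d/N).
predicted = np.fft.ifft(X * np.exp(2j * np.pi * k * d / N)).real

actual_future = np.cos(2 * np.pi * k0 * (n + d) / N + phase)
print("max prediction error:", np.max(np.abs(predicted - actual_future)))
```

For real noise the spectrum drifts, so the predictor would need to re-estimate every frame; for a stable tone the prediction is essentially exact.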

Similar ideas could be used for removing unwanted signals. For
example, imagine two people are transmitting on the same channel. If
you know what type of modulation and error correction they are both
using, it seems feasible that one signal could be decoded, subtracted
from the incoming signal, leaving enough information about a weaker
signal to decode that as well. If neither signal can be decoded
"first" (i.e. when treating the other signal as random noise), then I
guess that by representing the data streams as a system of linear
equations it is still possible to decode them both, as long as the sum
of the signals' data bandwidths is less than the channel capacity.
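The decode-the-strongest-first idea is known in the literature as successive interference cancellation. A toy sketch with two BPSK streams of assumed amplitudes (no error-correction coding, hard decisions only):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5000

# Two BPSK streams sharing a channel, with assumed amplitudes.
bits_a = rng.choice([-1.0, 1.0], N)  # strong station (amplitude 2.0)
bits_b = rng.choice([-1.0, 1.0], N)  # weak station (amplitude 0.5)
rx = 2.0 * bits_a + 0.5 * bits_b + 0.1 * rng.standard_normal(N)

# Step 1: decode the strong signal, treating the weak one as noise.
est_a = np.sign(rx)
# Step 2: subtract its reconstruction, then decode what remains.
est_b = np.sign(rx - 2.0 * est_a)

print("strong-station bit errors:", int(np.sum(est_a != bits_a)))
print("weak-station bit errors:  ", int(np.sum(est_b != bits_b)))
```

The weak station is unreadable directly, since the strong one swamps it, but becomes easy once the strong signal's reconstruction is subtracted.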

Does any of the above sound vaguely plausible? Has it been done
before? How much real-world noise is "predictable"? How complex
would my noise prediction models need to be to get any real benefit?
Is this the kind of thing I could investigate with a software defined
radio and lots of MATLAB?

Thanks
Oliver Mattos
Imperial College London Undergraduate

(Cross-posted from sci.physics.electromag; I can't seem to find a more
directly relevant group)
#2 - Posted March 22nd 10, 01:23 AM to rec.radio.amateur.antenna
Hi Oliver,

There are essentially two types of noise or interference affecting
signals. Random or white noise is the product of the cosmic microwave
background picked up by an antenna, atmospheric interference, and
thermal effects in circuit components. This noise cannot be predicted
by any currently known method because it is truly random.

Interference from man made or some natural sources can be regular and it is
possible to partially or almost totally negate the effects in some cases.

The signal to noise ratio of a signal can be improved by increasing
the power of a transmission, reducing its bandwidth (effectively
increasing the power density because the signal is confined to a
narrower bandwidth), slowing the data rate, repeating certain elements
of the transmission, or implementing error correcting routines to fix
errors in the received signal. Any or all of these methods can be
combined to give a robust receiving system, with the proviso that the
signals to be received are not completely swamped by random noise.
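The repetition point can be quantified: averaging R independent repeats cuts the noise power by a factor of R, i.e. about 10*log10(R) dB of SNR gain. A quick sketch with assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)

symbol = np.ones(500)   # a constant transmitted "element" (assumed)
repeats = 16
# Each repeat picks up fresh unit-variance Gaussian noise.
rx = symbol + rng.standard_normal((repeats, len(symbol)))

averaged = rx.mean(axis=0)
noise_single = np.var(rx[0] - symbol)
noise_avg = np.var(averaged - symbol)
gain_db = 10 * np.log10(noise_single / noise_avg)
print(f"SNR improvement from {repeats} repeats: {gain_db:.1f} dB")
```

Sixteen repeats buy roughly 12 dB, at the cost of sixteen times the transmission time: exactly the speed-for-SNR trade described above.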

Good examples of these methods in use can be seen in reception
techniques used around 136kHz, where very narrow bandwidths, slow
transmission speeds and strong error correction allow reception of
signals that are not audible to the human ear. That said, the human
brain has a remarkable capacity to pick signals from random noise, due
to our innate tendency to find patterns in everything we see or hear.
A skilled operator can decode a single weak morse code signal in the
presence of heavy atmospheric noise and interfering signals from
several other stations near the same frequency.

I think a lot of what you are asking about has already been done to a pretty
high level and you may well be able to build upon the current methods and
come up with some improvements. What you will not be able to do is
reconstruct a signal that is not there because it has been totally drowned
out in random cosmic, atmospheric or thermal noise.

There is no antidote to true randomness. It's like adding infinity to
infinity, you just get infinity (and that could be more or less than you
started with depending on the maths you are using).

Have a look at some of the freeware SDR decoders that are around for digital
receivers and perhaps purchase one of the cheap Softrock receivers that
Waters and Stanton sell for around £20 to carry out some experiments. With
the right software and a decent audio card, these receivers can rival the
performance of the top end commercial offerings although only on a single
band. With some real world experience, you will soon find out what you are
up against. Not an easy project, but a very rewarding field of study.

Mike g0uli

"Oliver Mattos" wrote in message
...
Hi,

All the communication equations and formulae today I know of (eg. the
Shannon-Hartley Theorem), give limits on data transmission given
certain signal and noise power levels.

Most models assume that the data received is the sum of the original
signal and Gaussian noise. More advanced models assume a transfer
function is applied to the signal to simulate multipath, and other
radio phenomena.

My question is that since in many cases at least part of the noise is
not entirely unpredictable, it seems like if it could be predicted,
then it could be subtracted from the received signal, therefore not
counting as noise as far as the Shannon-Hartley Theorem goes,
therefore allowing a higher channel throughput when all other
conditions are the same.

Examples of "predictable" interference would be EMI from other man-
made devices, such as oscillators in power supplies.

My idea for doing this would be to receive a given signal (assumed
digital), demodulate it and apply error correction to obtain the
original data. Next, re-encode and modulate the data just like the
transmitter did. At this point, the receiver has a precise copy of
the data transmitted. Next apply a transfer function which simulates
the channel (this part would have to be self-tuning to minimise
error). Now the receiver has a copy of the data as it would have
been
received if there were no external noise sources (but including the
effects of signal reflection and fading, which would be included in
the transfer function).

Next, the receiver could subtract the "ideal" received data from the
actual received data, obtaining the noise received. Of this noise,
some is predictable, and some is truly random (assume true Gaussian).
This data could then be Fourier transformed, time-shifted, and
inverse
Fourier transformed to obtain a prediction of noise, which could then
be subtracted from the incoming signal for the next piece of received
data.

Similar ideas could be used for removing unwanted signals. For
example, imagine two people are transmitting on the same channel. If
you know what type of modulation and error correction they are both
using, it seems feasible that one signal could be decoded, subtracted
from the incoming signal, leaving enough information about a weaker
signal to decode that as well. If neither signal can be decoded
"first" (ie. when treating the other signal as random noise), then I
guess using linear equations to represent the data streams, it is
still possible to decode them as long as the sum of signal data
bandwidths is less than the channel capacity.

Would any of the above sound vaguely plausible? Has it been done
before? How much of real-world noise is "predictable"? How complex
would my noise prediction models need to be to get any real benefit?
Is this the kind of thing I could investigate with a software defined
radio and lots of MATLAB?

Thanks
Oliver Mattos
Imperial College London Undergraduate

(Cross posted from sci.physics.electromag, I can't seem to find
directly relevant groups)


#3 - Posted March 23rd 10, 06:24 PM to rec.radio.amateur.antenna
On Mar 21, 4:49 pm, Oliver Mattos wrote:
[snip]


Many types of noise certainly are predictable and real-world noise
blankers do take this into account. They are not usually designed
academically but to deal with specific issues.

e.g. Ham radios made in the 70's and 80's often had truly excellent
noise blankers to prevent the Russian Woodpecker from blowing our ears
off. I know you are talking about "subtracting" but for the common HF
modes, blanking can be incredibly effective too.

I think the emphasis on good blanking has died off since the 80's as
the Woodpecker became less troublesome and then disappeared.

There are modern noise-rejecting "smart speakers" that auto-notch
heterodyne whistles and some regular emission noise patterns etc.

The VLF experts often use a separate "noise antenna" which is phased,
scaled, and then subtracted off the received signal.
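That phase-and-scale step can be done in one shot as a complex least-squares fit of the noise-antenna channel to the main channel. A sketch with made-up signal, interferer, and antenna-coupling values:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4000
t = np.arange(N)

wanted = np.exp(2j * np.pi * 0.05 * t)              # desired signal (assumed)
interferer = np.exp(2j * np.pi * 0.013 * t + 1j)    # local noise source

# Main antenna hears both; the noise antenna hears mostly the interferer,
# with an unknown gain and phase (values made up).
main = wanted + 0.8 * interferer
ref = 1.7 * np.exp(0.4j) * interferer + 0.01 * rng.standard_normal(N)

# Least-squares complex coefficient: the scale-and-phase that best maps
# the reference channel onto the main channel before subtraction.
c = np.vdot(ref, main) / np.vdot(ref, ref)
cleaned = main - c * ref

before = np.mean(np.abs(main - wanted) ** 2)
after = np.mean(np.abs(cleaned - wanted) ** 2)
print(f"interference power: {before:.2f} before, {after:.2e} after")
```

In practice the coefficient would be tracked adaptively, since the coupling between the two antennas drifts over time.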

Tim N3QE