Murgos
Oct 21, 2010
I've found the Brookings Institution to be doing a credible job of discussing and describing disinformation in the media.

Here is an article that discusses in detail some recent disinformation attacks and some of the current methods: https://www.brookings.edu/techstream/how-disinformation-evolved-in-2020/

This one is a bit older but does a good job describing the scope of the problem: https://www.brookings.edu/research/how-to-combat-fake-news-and-disinformation/

Under their media and journalism page you can find lots of links to other articles (https://www.brookings.edu/topic/media-journalism/).

One of the more interesting things, which I've really only seen them report on, is what the major social media sites are doing to combat disinformation.


Murgos
Oct 21, 2010

Literally Kermit posted:

Oh good, should be a quick read

It's more than you would expect, actually. For example, Google closed a flaw in their results that lifted recently created, one-off websites with exact matches to a search term to the top result.

This allowed bad actors to make a fake news site (or more than one) and then disseminate a social media post with an easily remembered search term. So, people would 'do their own research', look up the key term, and be directed by Google right to the honeypot, giving it an air of respectability.

That these fake sites were designed to look like real news outlets made them even more convincing.

"Look, that outrageous thing I heard is true! I searched for it and there is all kinds of reporting on it!"

Murgos
Oct 21, 2010

quote:

There is no inherent value to maintaining some sort of high "correct:incorrect ratio" with respect to media and media consumption. Someone can carefully avoid all direct falsehoods and end up believing and supporting a bunch of terrible things because they carry a set of assumptions that make them incapable of correctly interpreting the information they're exposed to.

This is provably false. The source of the error is important; it's the difference between a Type I and a Type II error.

If you have all the information, the information is correct, and you make an incorrect (or even substantially less efficient) decision based on it, then you are faulty.

If your inputs are false and you make a decision that would be correct given the inputs you were provided, then your system is working correctly and the input is faulty.

This could, theoretically, be the same output in both cases but the source of the error is different.

So, you could only be correct if there is no value in determining the source of the fault, which I think can be trivially disproved with the example of a simple redundant system.

In the first case, three people given the same correct information, where one is faulty and the other two are not, will still produce a majority opinion that is correct.

In the second case, all three people will be in error because their shared input was faulty.

So, the source of the error is actually very important to determining truth.
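
If it helps, here's a toy sketch of that redundant-system example (Python, with made-up values purely for illustration): three evaluators vote on the same question. When the shared input is correct and one evaluator is internally faulty, the majority still comes out right; when the shared input itself is wrong, all three agree on the wrong answer and no amount of redundancy saves you.

code:

def vote(answers):
    """Return the majority answer from three independent evaluators."""
    return max(set(answers), key=answers.count)

def evaluator(observed_input, faulty=False):
    """A working evaluator reports what it observed; a faulty one flips it."""
    return (not observed_input) if faulty else observed_input

# Case 1: correct shared input (True), one internally faulty evaluator.
case1 = [evaluator(True, faulty=(i == 0)) for i in range(3)]
print(vote(case1))   # True -- the majority masks the internal fault

# Case 2: every evaluator works correctly, but the shared input is wrong (False).
case2 = [evaluator(False) for _ in range(3)]
print(vote(case2))   # False -- all three agree on the bad input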

Murgos
Oct 21, 2010

Discendo Vox posted:

I am in internet outage land so I can’t post in detail but you’re absolutely correct. That model doesn’t help with truth correspondence, only correspondence between intended message and received message. It’s a very basic model and specifically not supposed to reflect all systems or phenomena, or be a way to understand the whole world; it’s just good at breaking down one aspect of it.

A better model would probably show that the encoding of the information by the transmitter is a source of noise, as is the decoding performed by the receiver, rather than the noise being introduced purely in the transmission medium.

Murgos
Oct 21, 2010

Jarmak posted:

That's not what noise is. Noise is extraneous signals picked up by a receiver. It's not distortion from the transmission medium, it's random nonsense that exists within the transmission medium. It exists distinct from the signal, otherwise it is not noise.

You don't think that encoding or decoding a signal can introduce noise?

edit: My frame of reference here is A/D and D/A conversion; if you are considering some ideal system then okay, but I don't think that's a useful paradigm for a conversation about errors in communication (quick sketch after the quote below).

edit: VVVV No need, I've pulled it. I am aware that most of modern digital communication is based on work done by Shannon and others. I don't think I have ever read that specific paper, but I am sure it is internally consistent given the way it states the problem. Note that the authors define the 'transmitter' block in terms of technologies like telegraphy, telephony and early TV transmitters, which all introduce noise from various sources.

quote:

A transmitter which operates on the message in some way to produce a signal suitable for transmission over the channel. In telephony this operation consists merely of changing sound pressure into a proportional electrical current. In telegraphy we have an encoding operation which produces a sequence of dots, dashes and spaces on the channel corresponding to the message. In a multiplex PCM system the different speech functions must be sampled, compressed, quantized and encoded, and finally interleaved properly to construct the signal. Vocoder systems, television and frequency modulation are other examples of complex operations applied to the message to obtain the signal.
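
To be concrete about the A/D point above (a toy sketch, not a real codec): quantize a clean signal to a handful of levels and the reconstruction differs from the original even though the 'channel' added nothing at all.

code:

import math

def quantize(x, levels=8, lo=-1.0, hi=1.0):
    """Round a sample to the nearest of `levels` evenly spaced values (a crude ADC/DAC)."""
    step = (hi - lo) / (levels - 1)
    return lo + round((x - lo) / step) * step

samples = [math.sin(2 * math.pi * t / 50) for t in range(50)]   # the 'analog' source
received = [quantize(s) for s in samples]                       # noiseless channel, coarse conversion

worst = max(abs(a - b) for a, b in zip(samples, received))
print(f"worst-case quantization error: {worst:.3f}")   # non-zero despite a perfect channel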

Murgos fucked around with this message at 19:37 on Apr 30, 2021

Murgos
Oct 21, 2010

Jarmak posted:

There can be noise within an encoder but that is in context of the encoder. When you analyze a signal noise is additive interference, whether you define something as noise or not is a function of the frame of reference you're doing the analysis under. If I am doing a transmission line analysis of the internal circuitry of the encoder then yes, poo poo like induced currents creates noise in the circuit. Analyzing a transmission you're more likely to consider that part of the transform function of the encoder and classify noise as the additive interference between the transmitter and receiver.

We're working with a metaphor here though, so I was assuming we're working with an ideal model, in which case it would make more sense to consider the effects of the encoder as distortion/artifacts, because in the metaphor they're intended to represent the distorting effects of the transformation, not interference.

To your point, some of the original example in the OP blurs noise and error, so I was following that context to avoid introducing more terms.

My point (without writing a paper) restated is as follows:

The original paper uses a TV camera as an example of a transmitter. I think it's trivial to note that a TV camera in 1948, pointed at a stage full of actors, drops a HUGE amount of the data present in the information source; no matter how low-noise the transmission medium and how high the quality of the receiver's components, that data will just never be observable at the destination.

A full-color, 3D image was flattened, scanned at some discrete rate, and then presented to the destination as a series of electrons blasted onto a phosphorescent tube.

I think this concept, that both the transmission and reception are altering the information, is important to understand when discussing how information changes between the source and the destination.
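
As a toy version of that in code (obviously not a real TV system, just an illustration of lossy encoding): sample the 'scene' coarsely at the transmitter and even a perfect, noiseless receiver cannot reconstruct the detail that was never captured.

code:

scene = [(t % 7) * (t % 5) for t in range(100)]   # stand-in for a detailed information source

def tv_camera(source, keep_every=10):
    """A 'transmitter' that only samples the scene at a coarse, discrete rate."""
    return source[::keep_every]

def ideal_receiver(signal, keep_every=10):
    """A perfect, noiseless receiver: the best it can do is hold each sample it was given."""
    return [value for value in signal for _ in range(keep_every)]

reconstructed = ideal_receiver(tv_camera(scene))
print(reconstructed == scene)                                     # False
print(sum(a != b for a, b in zip(scene, reconstructed)), "of 100 samples differ")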


Murgos
Oct 21, 2010

Discendo Vox posted:

Transmission and reception affect outcomes but I think they're not classified as noise. The noise in the example is radium's coding. Issues of encoding and decoding reflect different error or information types.

If you just look at the model, it implies that the receiver is the inverse of the transmitter and that the data at the observer is, as Jarmak notes, the original data + noise.

Last time, because I think we all understand each other's PoV: the data at the observer is not just data + noise, it's Frecv(Ftrans(data) + noise).
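
As a rough sketch of that difference (arbitrary transforms, just to show the shape of the two expressions): with the exact same noise, data + noise and Frecv(Ftrans(data) + noise) do not come out the same, because the transmitter's encoding changes what the noise gets added to.

code:

import random

random.seed(0)

def f_trans(data):
    """Transmitter: a lossy encode, here quantizing each value to the nearest 0.5."""
    return [round(x * 2) / 2 for x in data]

def f_recv(signal):
    """Receiver: the nominal inverse of the transmitter; here it passes the levels through."""
    return list(signal)

data = [random.uniform(0, 1) for _ in range(5)]
noise = [random.gauss(0, 0.01) for _ in range(5)]

naive = [d + n for d, n in zip(data, noise)]                        # data + noise
observed = f_recv([t + n for t, n in zip(f_trans(data), noise)])    # Frecv(Ftrans(data) + noise)

print(naive)
print(observed)   # differs from `naive` with identical noise, purely because of the transforms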
