Epinephrine
Nov 7, 2008
So, taking a step back, here's more or less all the points of contention thus far about the propaganda model. I organize this as an ordered list of questions about the model and its application:

1) Is the model falsifiable?
Yes: continue
No: The model is unfalsifiable and has no value to this thread

2) Can the model be used to determine the veracity of reporting and make deductions about claims (be it on a specific case level or in aggregate)?
Yes: continue
No: The model cannot be used to evaluate the veracity of reporting, not even in aggregate, and has no value to this thread given the purpose of this thread.

3) Does it have the capacity described in (2) on the level of specific cases or in aggregate?
Specific cases: We have specific cases of Herman using the model to reject individual reports from Western media on genocide. Continue.
In aggregate: Herman rejects, in aggregate, Western media reporting about various genocides. Not specific articles or reports, all the reporting is wrong and corrupted. Continue.

4) Is Herman using his own model when he does this?
Yes: continue
No: Herman wrote a book about how Western media is propaganda, then somehow never applied his model when claiming Western media was propaganda to reject Western media reporting. This conclusion is absurd and no amount of argumentation can convince otherwise. Go to yes.

5) Is Herman using or misusing his own model?
Using: The model gives an intellectual framework to justify genocide denial.
Misusing: The model is so hard to use correctly that its own creators can't use it right. In which case, how can we expect goons to use it right?

This gives us the following possible outcomes:
A) The model is unfalsifiable and has no value to this thread.
B) The model cannot be used to evaluate the veracity of reporting, not even in aggregate, and has no value to this thread given the purpose of this thread.
C) The model gives us an intellectual framework to justify genocide denial.
D) The model is so hard to use correctly that its own creators can't use it right. In which case, how can we expect goons to use it right?

None of these outcomes gives credence to the propaganda model. All outcomes lead to rejection.

Raenir Salazar
Nov 5, 2010

College Slice
I'd like to repeat my earlier question with a bit of variation, since it didn't get much attention and maybe got lost: for indisputably or largely justified US-led interventions like the Korean War, WW2, Gulf War 1, and stopping the British-French invasion of the Sinai to control the Suez Canal, would the PM confirm these as good interventions?

Also I would like to give my thanks to the person who linked the AI driven dungeon text game thing, I'm having fun with it!

sean10mm
Jun 29, 2005

It's a Mad, Mad, Mad, MAD-2R World
Why would the US government-media establishment manufacture/exaggerate/etc. the Rwandan genocide when uh they never showed any serious interest in loving doing anything there? What was even the hypothetical point to making up an atrocity to just ignore it? Wasn't the US policy ultimately "blow it off and let everyone die"?

None of these post-WW2 genocide denials are a good look but that one always seemed particularly nonsensical to me.

:shrug:

piL
Sep 20, 2007
(__|\\\\)
Taco Defender

Epinephrine posted:

So, taking a step back, here's more or less all the points of contention thus far about the propaganda model. I organize this as an ordered list of questions about the model and its application:

1) Is the model falsifiable?
Yes: continue
No: The model is unfalsifiable and has no value to this thread


What? I've lost track of the thread because I don't care about the PM and I'm not going to read multiple books to find out more about it and make informed judgments about its implementation. Is it really a point of agreement that only falsifiable models are of value to a thread on media analysis and communication? Shannon-Weaver, as applied in post two of this thread, is used in a manner that would be unfalsifiable. It makes no predictive claims, first of all, but to use it to make predictive claims about media intent and interpretation (vice signal accuracy) would require you to narrow a question so greatly as to be absurd.

There are entire swaths of questions a person could try and should try to ask about media that are by their very nature unfalsifiable without very rigorous and narrow definitions of all of the terms, definitions which would greatly reduce practical value.

  • Is this article well written?
  • Is this source trustworthy?
  • Is this article true?
  • What types of sources are trustworthy?
  • What are some ways to notice that I am being manipulated by media?
  • What rhetorical techniques should be considered appropriate in a particular format and which should cause doubt in the reader?
  • Does this collection of articles on a subject represent a sufficiently diverse range of opinions to ensure that I'm well-versed on the arguments?
  • Is this an appropriate type of media to make and support this claim?
  • Should I spend $10 to access this media?
  • Does the funding source of a content generator affect the trustworthiness of the generated content?
  • Should I trust this content funded by this source?
  • Is this clickbait?
  • What is the author's intent?
  • How did the publication of a particular piece of media affect a particular situation?

All of these seem like appropriate discussion points for this thread and none of them have any place in any falsifiable model without defining very restrictive terms. Prescriptive models that address these questions could be generated or referenced and could be of value to this thread. They would by necessity be unfalsifiable and would be inappropriate for establishing claims of causal relationships or making predictions.

Sekhem
Feb 13, 2009

Epinephrine posted:

No: Herman wrote a book about how Western media is propaganda, then somehow never applied his model when claiming Western media was propaganda to reject Western media reporting. This conclusion is absurd and no amount of argumentation can convince otherwise. Go to yes.
But your previous premises here already give you an indication of why this isn't absurd, or at least why it's an inaccurate framing of what's happening. If the PM isn't used to establish the empirical veracity of particular stories, then it directly can't be used as the basis of verifying the supposed facts on the ground. And since the statistical aggregate of reporting isn't presented as an indication of inaccuracy of bare facts - narratives can be realistic in terms of the bare facts but still propagandistic in emphasis, attention and framing, that's the whole point of the worthy / unworthy victims distinction - then that can't be what he's using to determine the facts of the events either.

Herman's writings on Rwanda aren't simply a rejection of media reports and subsequent speculation, they're rejections of the available scholarly analysis and historical data. He doesn't start by using media critique to come to a speculative conclusion about events, he uses a crude political analysis of historical sources and scholarship to do so. There's no media analysis being done when Herman relies selectively on particular first hand accounts against those compiled in the conventional scholarship on the subject, it's just the same poor and biased political analysis you can encounter all the time outside the field of media studies.

The order of operations here seems pretty clear to me: it's using politically biased scholarship to establish a narrative of the empirical events and then using this as a benchmark through which media analysis can be conducted. I don't see any real indication that it's the media analysis that has the causal role here. If this order is reversed, then why would speculation about events based on media critique lead the author to establish conclusions about the reality of particular events when the PM presents propagandistic media narratives as an independent variable to the reality of events? Why would the PM lead him to the conclusion of empirical falsehood in some cases but not others? Clearly there has to be an additional process of establishing empirical veracity to make those distinctions.

e: I feel like a lot of this barrier to communication hinges on the assumption that propaganda necessarily implies falsehood, so if a user of the PM is making claims of falsehood then that must be what they're using to do so. But that's not the case, propaganda is conducted through narrative framing, presentation and emphasis, not just the distortion of facts. Even the most unambiguously totalitarian states would leverage real and verifiable events as propaganda when it suited them.

If the conclusions are simply borne from media analysis then I don't think there's anything to indicate that in the text, because there's an accompanying empirical argument (no matter how poor) of the events themselves. If you think those are just a postfacto way to hide the true motivations, then sure I guess that's possible, but that's just a speculation which seems actually unfalsifiable and unprovable.

Sekhem fucked around with this message at 07:47 on Jun 25, 2021

:rolleyes:
Apr 2, 2002
The amazing thing about this thread is how hard you and several other posters are trying to defend a model written by a guy who denied multiple genocides because he thought the media was biased in very specific pro-Western ways - on the grounds that yes, he may have denied some genocides, but actually the model is very good at identifying pro-Western media bias.

Then, when this is pointed out,

Red and Black posted:

But all this is an attempt to further derail this thread and endlessly re-litigate everything ever written by Chomsky and Herman, bleating "genocide denial" instead of engaging with the arguments being made and the evidence offered wrt the PM.

Imagine being a leftist and writing "bleating "genocide denial"" in a sentence. There used to be libertarians on these forums that were run off for defending far less terrible poo poo than this, if only because they were smart enough or ignorant enough never to bring up Murray Rothbard in public.

You don't have to defend this just because Chomsky wrote the foreword. "When someone denies multiple genocides in a single book, that person is not allowed to be taken seriously about current events anymore" is a bright line rule, not a guideline.

(USER WAS PUT ON PROBATION FOR THIS POST)

Sekhem
Feb 13, 2009
I don't even particularly like Chomsky and am barely familiar with Herman's independent works. My posts have had some pretty critical things to say about both of them. Part of what I'm interested in is how this model - despite what some detractors have claimed, though I think their objections were pretty thoroughly dismissed in previous pages - is taken pretty seriously academically. That would be a very curious, and concerning, fact if genocide denial or related colossal failures of reasoning were implicit or endemic to this methodology.

What particularly makes me skeptical of this is that MC is, as far as I remember, a very transparent and direct work. It makes its theses and methodology very clear, in a way that's pretty easy to isolate from anything else the authors might have written. If such faults were so endemic I think they would be very identifiable in the text itself, but the critiques and rejections here seem to be doing basically anything but engaging with its model of media analysis directly. Which is what this thread should be for, I think.

Sekhem fucked around with this message at 15:52 on Jun 25, 2021

Sharks Eat Bear
Dec 25, 2004

Sekhem posted:

But your previous premises here already give you an indication of why this isn't absurd, or at least why it's an inaccurate framing of what's happening. If the PM isn't used to establish the empirical veracity of particular stories, then it directly can't be used as the basis of verifying the supposed facts on the ground. And since the statistical aggregate of reporting isn't presented as an indication of inaccuracy of bare facts - narratives can be realistic in terms of the bare facts but still propagandistic in emphasis, attention and framing, that's the whole point of the worthy / unworthy victims distinction - then that can't be what he's using to determine the facts of the events either.

Fwiw I'm a "neutral" observer to this debate -- I know virtually nothing about Chomsky, Herman, the book MC, the PM -- and have found compelling arguments on both sides of this debate.

I haven't read any argument ITT that seriously challenges Sekhem's post here. From what I can gather from this thread, it would need to be shown that the PM is intended to be used to verify the validity of individual events or pieces of reporting, in order to prove that Chomsky/Herman's genocide denial claims were the outcome of using the PM. Otherwise, it seems fairly clear that the methodology of the PM is distinct from the methodology of Herman's individual work, and if the methodologies are different then I don't see how the latter delegitimizes the former.

I'll also make a somewhat superficial point that the title of the book is Manufacturing Consent, not Manufacturing Truth. I recognize that a book's title can be contradicted by its content, but I have a suspicion that this word choice was deliberate and indicative of the intent of the model.

Lib and let die
Aug 26, 2004

Sharks Eat Bear posted:

I'll also make a somewhat superficial point that the title of the book is Manufacturing Consent, not Manufacturing Truth. I recognize that a book's title can be contradicted by its content, but I have a suspicion that this word choice was deliberate and indicative of the intent of the model.

I bought a fairly recent edition a few years back, it's bugging the hell out of me that I can't find it right now to take a picture of the cover, but to tail onto this, in the edition I've got a cutout of the letter T superimposed over the S in "Consent" - I don't know if this was a publisher choice, an editor choice, or Chomsky's decision, but it seems like it'd mean something.

karthun
Nov 16, 2006

I forgot to post my food for USPOL Thanksgiving but that's okay too!

Lib and let die posted:

I bought a fairly recent edition a few years back, it's bugging the hell out of me that I can't find it right now to take a picture of the cover, but to tail onto this, in the edition I've got a cutout of the letter T superimposed over the S in "Consent" - I don't know if this was a publisher choice, an editor choice, or Chomsky's decision, but it seems like it'd mean something.

This cover?

Slow News Day
Jul 4, 2007

Sharks Eat Bear posted:

I haven't read any argument ITT that seriously challenges Sekhem's post here. From what I can gather from this thread, it would need to be shown that the PM is intended to be used to verify the validity of individual events or pieces of reporting, in order to prove that Chomsky/Herman's genocide denial claims were the outcome of using the PM. Otherwise, it seems fairly clear that the methodology of the PM is distinct from the methodology of Herman's individual work, and if the methodologies are different then I don't see how the latter delegitimizes the former.

I think we're going in circles.

If you cannot use PM to validate individual events or pieces and help guide us to the truth, then what use is it in D&D? I'm not talking about the way it is almost exclusively misused as a tool to dismiss individual articles or claims one disagrees with, in favor of alternative or fringe sources one agrees with. I'm talking about its correct usage, as discussed. Because we saw that the broad claims the model makes with regard to Western media having pro-Western biases, and the mechanisms of those biases:

a) are not that interesting or insightful or even useful as actual analysis of media coverage (quoting evilweasel, "who cares?")
b) cannot be used to discredit and dismiss individual articles or claims (because that is a misuse of the model)

Sekhem
Feb 13, 2009

Slow News Day posted:

If you cannot use PM to validate individual events or pieces and help guide us to the truth, then what use is it in D&D?
You generally can't use any model of media analysis to establish the empirical facts of events. The PM is not unique in this regard. Media analysis can be very effective at pointing out deficiencies in a particular presentation of the facts, but it's always going to require supplementation by some other method of research in order to cover for those inadequacies.

Usually that's a demand greater than the capacity any random forums poster has. So what we can do is use media analysis to help guide us to the truth by giving us the tools to read critically and be cognisant of the absences and framing present in media products.

The PM makes very specific contributions to that, by talking about how and in which direction those absences and framing are likely to be directed.

I don't find the "who cares?" argument particularly convincing because it has rarely been argued that the model has no useful contributions to make; this is just asserted. Evilweasel invokes other models of understanding bias he thought were more explanatory - "man bites dog," "if it bleeds it leads" etc - but pretty much none of these would explain the disparity of reporting in the cases that MC discusses. Mass killings and political violence that are underreported or indifferently framed in mainstream media sources don't bleed any less, and aren't any less novel, than those which receive greater care and attention.

evilweasel
Aug 24, 2002

Sekhem posted:

You generally can't use any model of media analysis to establish the empirical facts of events. The PM is not unique in this regard. Media analysis can be very effective at pointing out deficiencies in a particular presentation of the facts, but it's always going to require supplementation by some other method of research in order to cover for those inadequacies.

The issue is the Propaganda Model cannot be used to point out deficiencies in a particular presentation of facts, if you are trying to use it in the broader sense that seeks to separate it from the corresponding genocide denial. It instead seeks to claim a bias in the media's overall presentation of facts, not any particular article or set of articles.

Either it is useful in reviewing the information on a specific event - such as a genocide, in which case the model's authors repeatedly denying genocides is directly on point - or it's not, in which case the model is not useful "at pointing out deficiencies in a particular presentation of the facts."

The people promoting it have specifically sought to distance its use from the genocide denial, so I will accept their view of it as meaningless when applied to a particular presentation of facts; but that means it's squarely in "so what" territory.

Sekhem posted:

Evilweasel invokes other models of understanding bias he thought were more explanatory - "man bites dog," "if it bleeds it leads" etc - but pretty much none of these would explain the disparity of reporting in the cases that MC discusses. Mass killings and political violence that are underreported or indifferently framed in mainstream media sources don't bleed any less, and aren't any less novel, than those which receive greater care and attention.


You misunderstand my point. Those are more significant biases with respect to shaping what is or is not deemed newsworthy, and as a result of those significant biases existing any reasonable media consumer already should be aware that what is reported on does not match 1:1 with what happened. The Propaganda Model adds nothing to how you should review the media as a whole besides what you should have already learned (not assuming that because something was not reported, it did not happen). So in other words, "so what" - the conclusion if the model is true is what I'd already be doing, so why bother resolving if his claims are true or not.

Epinephrine
Nov 7, 2008

Sekhem posted:

But your previous premises here already give you an indication of why this isn't absurd, or at least why it's an inaccurate framing of what's happening. If the PM isn't used to establish the empirical veracity of particular stories, then it directly can't be used as the basis of verifying the supposed facts on the ground. And since the statistical aggregate of reporting isn't presented as an indication of inaccuracy of bare facts - narratives can be realistic in terms of the bare facts but still propagandistic in emphasis, attention and framing, that's the whole point of the worthy / unworthy victims distinction - then that can't be what he's using to determine the facts of the events either.

Herman's writings on Rwanda aren't simply a rejection of media reports and subsequent speculation, they're rejections of the available scholarly analysis and historical data. He doesn't start by using media critique to come to a speculative conclusion about events, he uses a crude political analysis of historical sources and scholarship to do so. There's no media analysis being done when Herman relies selectively on particular first hand accounts against those compiled in the conventional scholarship on the subject, it's just the same poor and biased political analysis you can encounter all the time outside the field of media studies.

The order of operations here seems pretty clear to me: it's using politically biased scholarship to establish a narrative of the empirical events and then using this as a benchmark through which media analysis can be conducted. I don't see any real indication that it's the media analysis that has the causal role here. If this order is reversed, then why would speculation about events based on media critique lead the author to establish conclusions about the reality of particular events when the PM presents propagandistic media narratives as an independent variable to the reality of events? Why would the PM lead him to the conclusion of empirical falsehood in some cases but not others? Clearly there has to be an additional process of establishing empirical veracity to make those distinctions.

e: I feel like a lot of this barrier to communication hinges on the assumption that propaganda necessarily implies falsehood, so if a user of the PM is making claims of falsehood then that must be what they're using to do so. But that's not the case, propaganda is conducted through narrative framing, presentation and emphasis, not just the distortion of facts. Even the most unambiguously totalitarian states would leverage real and verifiable events as propaganda when it suited them.

If the conclusions are simply borne from media analysis then I don't think there's anything to indicate that in the text, because there's an accompanying empirical argument (no matter how poor) of the events themselves. If you think those are just a postfacto way to hide the true motivations, then sure I guess that's possible, but that's just a speculation which seems actually unfalsifiable and unprovable.
Think of my last post as a logical flowchart. If we agree that the PM can't be used to make predictions about the accuracy of reporting, be it individual pieces or aggregate predictions, then the model doesn't really belong in this thread as a tool to assess media and Herman's use or misuse of the model doesn't matter. If it does, then how the model has been used or misused matters quite a lot.

To that end, I agree there's a clear disagreement between us about the order of operations in Herman's head. You seem to be suggesting that Herman first decided what facts about Rwanda were true or false and then used that conclusion as the basis to argue that all the Western reporting was wrong. My take is that he started with the notion that Western media serves as a propaganda system that effectively works for the US, which then led him to reject all the reporting on Rwanda, creating the space for him to insert these fringe beliefs. The common thread tying together all of these genocide denials by Herman is the rejection of widely-reported and established facts, on the basis that Western media serves as a propaganda system for the US.

Sekhem
Feb 13, 2009

evilweasel posted:

It instead seeks to claim a bias in the media's overall presentation of facts, not any particular article or set of articles.
Indications of directions of bias in an aggregate sense inform the pragmatic judgements we make about what biases are likely to be present in a particular article.

When reading an article critically, we're going to have to ask questions about what it might be leaving out, whether the amount of information seems proportional to the significance of the events, and the particular framing and conceptual terms being used. The overall judgements we can make about the aggregate direction of biases present in a media institution help us answer these questions.

No framework that focuses on a single article in a void can help you with that, because there's no universal heuristic we can use to determine answers isolated from a broader context. The answers we might come to in response to these questions are always going to be context dependent, so we need to base our judgements on broader aggregate behaviour to understand its place in that context.

evilweasel posted:

Those are more significant biases with respect to shaping what is or is not deemed newsworthy, and as a result of those significant biases existing any reasonable media consumer already should be aware that what is reported on does not match 1:1 with what happened.
My point is that they're not more significant biases with respect to shaping what is deemed newsworthy, at least in the particular fields we're talking about. I made a clear argument for why I think that, and you just seem to be restating your premise again.

That there even can be disagreement on this undermines your point - yes, ideally every media consumer recognises that bias exists, but that fact alone does not mean we identify and correct for the same biases as each other in our different readings. The PM is not the trivial observation that biases in narrative framing, attention etc. exist, it's the argument that the specific forms of bias it identifies exist. You can certainly argue against those, but it's a very direct hypothesis being presented, not something you can drill down to a useless single sentence truism.

Sekhem
Feb 13, 2009

Epinephrine posted:

Think of my last post as a logical flowchart. If we agree that the PM can't be used to make predictions about the accuracy of reporting, be it individual pieces or aggregate predictions, then the model doesn't really belong in this thread as a tool to assess media and Herman's use or misuse of the model doesn't matter. If it does, then how the model has been used or misused matters quite a lot.
You seem to be implying that the only thing media analysis is useful for is determining the empirical facts of events. Not only do I think this is not the only function of media analysis, I don't actually think it's the function of media analysis at all - that's the work of scholars and reporters, not media consumers. We're not bound by the rigorous standards these demand, and we don't have the time or resources available to follow those standards with every new political event anyway.

I think a useful framing of the PM is that it understands the processes of bias as "filters" that data is processed through. It doesn't provide us with the methods for verifying the bare data that forms the inputs, but I don't think any media analysis framework will. We're always going to need to invoke political, economic, historical analysis in order to do that. What media analysis does help us do is recover the input data from the filters that impact its presentation. We correct for the distortions that over/underemphasis, narrative framing etc. present, in order to recover a presentation of facts that's as neutral and value-free as we can manage.

Epinephrine posted:

My take is that he started with the notion that Western media serves as a propaganda system that effectively works for the US, which then led him to reject all the reporting on Rwanda, creating the space for him to insert these fringe beliefs. The common thread tying together all of these genocide denials by Herman is the rejection of widely-reported and established facts, on the basis that Western media serves as a propaganda system for the US.
I'm glad that you've recognised that we're working from different orders of operations, because that was a point I was trying to get across. But my objection to your presentation of the order of operations is that it offers absolutely no explanatory value for a number of questions. Primarily: why did this happen in the judgement of Rwanda but not other cases? Again, as I've pointed out repeatedly, the propaganda model doesn't imply falsehood - the "worthy victims" concept in MC points to examples of events that are presented as propagandistically leveraged but factually accurate. So... why were these not rejected as fabrications in a similar fashion?

If the PM identifies propagandistic leverage as orthogonal to factual veracity, possible to be present in both factually accurate or factually inaccurate cases - how can it have the prior position in this order of operations? If there's nothing else to go on, then making that decision within the model seems as arbitrary as a coinflip.

Sharks Eat Bear
Dec 25, 2004

Epinephrine posted:

Think of my last post as a logical flowchart. If we agree that the PM can't be used to make predictions about the accuracy of reporting, be it individual pieces or aggregate predictions, then the model doesn't really belong in this thread as a tool to assess media

Wrong, media analysis & criticism is a broader topic than "predicting the accuracy of reporting". There was a recent post by piL that covers this better than I can, quoted in part below.

Honestly the critics of the PM would be better served critiquing the model itself rather than trying to argue it's inappropriate or irrelevant to this thread. I think the debate over the past couple pages has been the most interesting part of this thread, which in itself is proof (to me) that it would be stupid to scope it out of this discussion.

piL posted:

What? I've lost track of the thread because I don't care about the PM and I'm not going to read multiple books to find out more about it and make informed judgments about its implementation. Is it really a point of agreement that only falsifiable models are of value to a thread on media analysis and communication? Shannon-Weaver, as applied in post two of this thread, is used in a manner that would be unfalsifiable. It makes no predictive claims, first of all, but to use it to make predictive claims about media intent and interpretation (vice signal accuracy) would require you to narrow a question so greatly as to be absurd.

Slow News Day
Jul 4, 2007

Sekhem posted:

Indications of directions of bias in an aggregate sense inform the pragmatic judgements we make about what biases are likely to be present in a particular article.

When reading an article critically, we're going to have to ask questions about what it might be leaving out, whether the amount of information seems proportional to the significance of the events, and the particular framing and conceptual terms being used. The overall judgements we can make about the aggregate direction of biases present in a media institution help us answer these questions.

No framework that focuses on a single article in a void can help you with that, because there's no universal heuristic we can use to determine answers isolated from a broader context. The answers we might come to in response to these questions are always going to be context dependent, so we need to base our judgements on broader aggregate behaviour to understand its place in that context.

This reads like an attempt to have your cake and eat it too: when people (including those defending PM) point out that PM is very bad at predicting or explaining individual media coverage, you respond with "well, of course: that is not its proper or intended usage", but then when they say "okay then why is it useful in D&D where people regularly use it to try to refute media outlets they don't like", you respond with "well, indications of bias in an aggregate sense inform our judgments about likely biases in a particular article." These statements contradict each other. And if they somehow don't, then that proves the fundamental criticism leveled against the Propaganda Model: you can use it to try to justify and explain any mainstream media behavior. It becomes a hole into which you try to jam pegs of various shapes and sizes, except the hole in this case is fluid and can change size and shape to accommodate any peg.

Here, let's use a specific example. This article was posted earlier in the Immigration thread: https://www.bbc.com/news/world-us-canada-57561760.amp

What does the Propaganda Model tell us in terms of "indications of directions of bias in an aggregate sense that can inform the pragmatic judgments we make about what biases are likely to exist" in this article?

Epinephrine
Nov 7, 2008

Sekhem posted:

You seem to be implying that the only thing media analysis is useful for is determining the empirical facts of events.

Sharks Eat Bear posted:

Wrong, media analysis & criticism is a broader topic than "predicting the accuracy of reporting".
The reason why this thread is important is because we want to look at articles and make good inferences about what happened based upon what was reported and we need a rigorous framework to do that well. Building a framework to make better inferences is the purpose of this thread.

quote:

I'm glad that you've recognised that we're working from different orders of operations, because that was a point I was trying to get across. But my objection to your presentation of the order of operations is that it offers absolutely no explanatory value for a number of questions. Primarily: why did this happen in the judgement of Rwanda but not other cases? Again, as I've pointed out repeatedly, the propaganda model doesn't imply falsehood - the "worthy victims" concept in MC points to examples of events that are presented as propagandistically leveraged but factually accurate. So... why were these not rejected as fabrications in a similar fashion?
So that's the thing: It did happen in other cases, in fact it's what Herman does in every case. In every case, he rejects Western reporting on the basis of it being propaganda and uses that space to insert fringe ideas. I harp on Rwanda because it's such a clear example of the point, but he plays the same game in his discussions of Bosnia, Kosovo, and Darfur.

Sharks Eat Bear
Dec 25, 2004

Epinephrine posted:

The reason why this thread is important is because we want to look at articles and make good inferences about what happened based upon what was reported and we need a rigorous framework to do that well. Building a framework to make better inferences is the purpose of this thread.

From the OP

quote:

This thread is intended for goons to cooperatively improve their ability to navigate the fraught modern media landscape; assisting one another separate fact from editorial, guiding each other to quality information, and teach each other to avoid the pitfalls of confirmation bias.

I can see how the PM could help navigate the modern media landscape as well as avoid pitfalls of confirmation bias. Maybe it doesn't help separate fact from editorial; so what?

From what I've learned about the PM in this thread, I don't think the PM is a perfect model or the only model that should be discussed, but that's very different than saying it's irrelevant or inappropriate to discuss ITT.

Thorn Wishes Talon
Oct 18, 2014

by Fluffdaddy

(and can't post for 10 days!)

Sharks Eat Bear posted:

I can see how the PM could help navigate the modern media landscape as well as avoid pitfalls of confirmation bias.

But it can, and in fact frequently does, lead to the exact opposite: it validates and reinforces existing confirmation biases. Dismissing those instances as "well those people (including the authors!) are just misusing it" is not helpful because it's not like it comes with an instruction manual or is honest and forthcoming about its own shortcomings. And a model that is just as easily misused as it is properly used is not going to be reliable or useful in a debate and discourse setting.

The bigger issue for D&D is that when the model is misused, there's no immediate or obvious way to refute it, because the misusage tends to be self-justifying, i.e. it empowers motivated reasoning and working backwards from conclusions (e.g. Western mainstream media pushes imperialist propaganda, therefore this article WaPo published about the Afghan government being likely to fall within 6 months after US withdrawal cannot be trusted).

Thorn Wishes Talon fucked around with this message at 21:54 on Jun 25, 2021

hobotrashcanfires
Jul 24, 2013

Well now some of y'all may look upon me and this post as some mere layman idly musing about things beyond my qualifications, but dang if it isn't a perfectly lazy afternoon for some idle layperson's A and/or B musement.

There's been an awful lot of conjecture about the "Propaganda Model", almost as if it were some mathematical equation you can plug into media like an algorithm to automagically derive bias, manipulation, or drat near any and all hidden intent behind it. Not trying to indirectly cast aspersions, but it seems like one of the driving undercurrents of this entire thread is that such a thing is even truly possible.

It's been a fair few years since I read Manufacturing Consent, and even then it had been an even greater time since it was published. Perhaps due to my simple background and lack of formal higher education I didn't take it in as some rigid construct that could derive the truth or even the most truth of all things, but instead as a series of concepts and examples of propaganda and manipulation via media with an eye toward prompting the reader to reckon with the deeper motivations and ideology behind the information they consume. Very much a product of its time, and issues touched upon within it had then, and still have today, rather poor historical documentation, let alone some rigorous effort of understanding and accountability - should such your standards be, I guess you have to throw out nearly every media and government source of information that exists. Sure my recollection could be mistaken and it was presented as something approaching the end all be all of deciphering the hidden aims of government and media disinformation, maybe I digested it with a critical eye by some curious stroke of luck!

The notion that there is any model, rigid framework, really any structure you can place on socially derived information that will reliably extract truth is absurd. It would have been ridiculous a thousand years ago, a hundred years ago, and in the media landscape of today it's just exponentially less possible. The question is whether there are ideas and concepts within Manufacturing Consent that are applicable to understanding the media that is presented to us in the modern day, and I'm pretty sure that's a yes.

No one has built, or likely can build, any kind of model or construct that will ever come close to universally or even reliably applying to social information with any confidence. The best we will ever have are ideas and concepts to critically examine a forever morphing media landscape. But you have to apply ideas and concepts and thorough examination that can often extend well beyond what's in question, to culture, psychology, ideology, motive, any and all intents explicit, implicit, blissfully unaware. There is no panacea. Sometimes it's obvious, sometimes it's anything but, more often than not it's just a bunch of cogs twirling about inside a poo poo machine whose only understanding is that their life, career, and prospects could be pulverized if they counteract inertia.

piL
Sep 20, 2007
(__|\\\\)
Taco Defender

Epinephrine posted:

The reason why this thread is important is

It is interesting to me how the conversation flipped from 'whether something is important to the thread' to 'whether the thread is important assuming these ideas about the thread are true'.

Epinephrine posted:

because we want to look at articles and make good inferences about what happened based upon what was reported and we need a rigorous framework to do that well. Building a framework to make better inferences is the purpose of this thread.

For certain definitions of rigor--ones used in this thread--this is impossible and for even more definitions, this is infeasible. There is an entire field of philosophy dealing with the subject of knowledge and truth, and we're not going to crack that code here. To try and defeat the red herring of 'falsifiable', I'm going to try and create the simplest model to do this very thing: predict the accuracy of reporting, and hopefully I can illustrate how it will necessarily begin to unravel at the edges.

Let's start with one of the simplest possible models: a linear model for how true an article is (i.e. how accurate it is).

Let y = b0 + b1*x1 + ... + bn*xn + e, where y is how true (accurate) the article is, each x is some measurable property of the article, the b's are the intercept and slopes to be estimated, and e is an error term.

That y-term should be offensive. I'm trying to measure truth. Scientists don't measure truth. They make observations. So to get to a falsifiable model this way, we'll need to define truthiness in a way that I promise will be unsatisfactory. But let's explore that: how could we do that?

We might choose to define truthiness as what percentage of statements made by the article are accurate. We're going to hit a wall here, because any individual article is composed of many statements, and really each statement can be made of many truths and falsehoods. And that's not counting the portions which would be interpretations, suppositions, opinions.

So then we could try to approximate truthiness with perceptions of truth. That's a little more achievable. You need to figure out who or what is doing the approximation.
  • You could make it everybody (though not everyone will want to participate in your sampling method).
  • You could make it a panel of experts and tailor your specific models to specific articles of interest to be defined within the fields of those experts.
  • Or you could rely on a trusted social group ranging from one to many

This will change the prediction your model, if successful, can make from "whether an article is accurate" to "whether <group> would consider the article to have been accurate". This is useful, but already is a huge constraint and much more specific. This model could now only tell me if a certain group of people would likely agree with an article!

To actually execute this approach, you might need to restrict the simple model quite a bit more. What if we categorized an article as either "agreeing with statement x", "disagreeing with statement x", or "having no position on statement x"? In this case, we'll take statement x to be a statement on something that we are confident is true. My linear model doesn't work anymore; we'll probably have to utilize generalized linear models for meaningful predictions. At this point we can compare properties of an article (we'd also need to have somebody categorize those) and then try to find which properties of an article are correlated with agreeing with true statements.

It's important to note that what we're describing here is not "how true an article is" or "what properties are more likely to be held by articles which are true", but rather, "what is the relationship between properties an article has and whether those articles are likely to be considered accurate by members of my classifying population". Can this be generalized? Who should this population be for you? Is it the same for me? What does this mean for complicated classifications such as "event x is good or bad" or "event x was a genocide or not"? This is basically the model of facebook/youtube if instead of measuring views you were asked whether you agree or disagree.
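To make that a bit less abstract, here's a rough sketch in Python of the kind of setup I mean. Everything in it is invented for illustration - the coded properties, the labels, the tiny dataset - and I'm just reaching for an off-the-shelf multinomial logistic regression as the generalized linear model:

code:

# Toy version of the categorization approach above: articles are hand-labeled as
# "agrees" / "disagrees" / "no position" relative to a reference statement x we're
# confident is true, plus a few hand-coded properties. A multinomial logistic
# regression (one kind of generalized linear model) then estimates which properties
# correlate with agreeing with the true statement. All data here is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical coded properties: [names its sources, cites primary documents, is an op-ed]
X = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
])
# Labels assigned by whoever we picked as our classifying population
y = np.array(["agrees", "agrees", "disagrees", "no position", "agrees", "disagrees"])

model = LogisticRegression(max_iter=1000).fit(X, y)

# For a newly coded article, the model predicts how our classifiers would label it --
# not whether the article is true.
new_article = np.array([[1, 0, 1]])
for label, p in zip(model.classes_, model.predict_proba(new_article)[0]):
    print(f"P({label}) = {p:.2f}")

Note what the output actually is: an estimate of how members of my classifying population would label the article relative to statement x. That's exactly the narrower claim described above, not a measurement of truth.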

One more issue with this: if some beneficiary produced this model and implemented it as an accessible service, and if it were trusted, there would immediately be an incentive to subvert it. The more trusted it became, the more incentive to subvert it. See search engine optimization.

My intuition (unfalsifiable and without rigor as it may be) is that if you try to build this model some other way, you will find yourself faced with difficult if not unacceptable compromises far before you get something useful.



Inbound: frustrating recursion.

One non-predictive model we can use to examine this relationship with language and knowledge would be to treat each article as its own model. Each one has observations (the observations, opinions, and mode of thought of the author and editors), some sort of kernel that transforms these qualities (like the slope and y-intercept of a simple linear model) via the writing process itself and the constraints imposed on the author, errors (whatever differentiates truth from the article), and the predicted variable, i.e. the article itself. But each of these models is tested via exactly one set of observations and one set of predictions. Therefore there is no way to compare the accuracy of one vs another, even though we heuristically know that certain articles must be more true than others, and even come to conclusions about them.



This is why I think the requirement of a falsifiable or rigorous model is a red herring, and I think defining a requirement for it is going to keep you from discussing productive ideas. If you remove this requirement, you end up with some options.

You can still "improve [goons] ability to navigate the fraught modern media landscape" by discussing prescriptive models--models that tell you how to behave. You won't be able to prove their effectiveness, but you can set a series of guidelines that may reduce the likelihood of falling for mistruths without knowing by how much. Think about Personal Protective Equipment (PPE) requirements. Certain work sites may require that you always wear a hardhat any time you cross into an industrial area. There are times when this is silly, and you may know it's silly--you look up, you know you're the only one here. Nobody is doing any work, or you're in a low-ceilinged hallway--the position you're in is no more dangerous than walking down the sidewalk, nothing can bump your head right now.

But you don't need to actually do the math to know that if you're wearing the hardhat anytime you cross that line, then you must be wearing it whenever you cross that line and work is actually being done--you must be wearing it when you emerge from that hallway into an open area with work happening above you. Because the cost is acceptably low (putting on a hardhat), even though it's impossible to know exactly how much less likely you are to get your head busted open by someone dropping a wrench, we know that the risk will only be lower than it would be otherwise, and the rule is easier to enforce/enact than the alternative. This prescriptive model of behavior, a set of recommended conditions that you should follow, is not falsifiable. But it doesn't have to rely on acceptable levels of proof to potentially be useful.


You can also discuss models as descriptive, again, without them being predictive. Consider the model presented in the second post. While the Shannon-Weaver communication model, in its original format, is falsifiable, it's only practicably so as a description of signal accuracy. If you set your assumptions and definitions sufficiently narrow, you can make predictions with it--about digital communications, bit flips on hard drives, etc. But to test it on something as complex as a tweet, you'll need to overcome all of the challenges of the model we tried to build above, and probably some other ones. So in the context of its usage here, it's non-falsifiable and non-rigorous, and if the reason you believe it is true is because it's falsifiable in a different context, I'd argue you're being gullible.

Instead, it's useful as a descriptive model that can help elucidate some points about media. Much like a signal, media is going to start with an idea that is converted to a transmission, is transmitted, is received, and then observed and is vulnerable during this process to the addition of noise--in this way, we illustrate that not only are some authors intentionally lying to you, but some may be unintentionally lying to you. This is an important consideration with media, and the model has value, even if it lacks rigor.

More recursion, because Godel, Escher, Bach. Actually, you would need to apply the model multiple times in order to approximate what is happening with the tweet--from <wherever thoughts come from> to formal thought, from thought to action, from action to transmission to a server, from processing within a server, to contextualizing and presentation via a feed, to receiving by a potential reader, to reading by the reader, to interpretation by the reader. Most of those are happening in the 'transmission' phase of the higher order version of this model, but also this model could be said to be happening many times over in each of these subordinate orders. For example, transmission to the server involves many sub transmissions from cellphone to tower, from tower to router, through many routers until it arrives wherever it's processed. Similarly the idea is doing the same thing within the minds of author and recipient. In the data transmission instances of this process, we've mitigated the risk of noise to almost zero, but we haven't done that in the sender and reader portions. And don't forget to multiply this by the fact that a tweet contains many ideas. And that since it's broadcast media, the same thing is happening many, many times to account for each reader. Ugh, exhausting.
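If it helps, here's that chain written out as a toy script. The hop names and noise rates are invented, and real platforms obviously don't operate on neat little lists of propositions, but it shows where the noise concentrates:

code:

# Toy illustration of the nested Shannon-Weaver picture above: a tweet's ideas pass
# through a chain of encode/transmit/decode hops, and each hop can add noise. The
# machine hops contribute almost none; the human hops contribute most of it.
import random

random.seed(0)

def noisy_hop(propositions, hop_name, noise_rate):
    """Pass each proposition through one hop, possibly distorting it."""
    result = []
    for p in propositions:
        if random.random() < noise_rate:
            result.append(f"<distorted at '{hop_name}': {p}>")
        else:
            result.append(p)
    return result

original_ideas = ["claim A", "claim B", "claim C"]

hops = [
    ("author composes the tweet", 0.20),
    ("transmission to the server", 0.001),
    ("feed contextualizes and presents it", 0.10),
    ("reader interprets it", 0.25),
]

message = original_ideas
for hop_name, noise_rate in hops:
    message = noisy_hop(message, hop_name, noise_rate)

print("what the author meant:    ", original_ideas)
print("what the reader took away:", message)

And per the above, you'd really have to run something like this once per reader, and recursively inside each hop, which is the point: the noise lives almost entirely in the human ends, not the wire.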

Consider George Box's likely apocryphal aphorism: all models are wrong, but some are useful.

Maybe the best value here then is to not worry if particular models are strictly true, or even useful for making predictions about the accuracy of media, but rather to consider a model (and its context) like you would a parable--is there a lesson to be taken from it? Does it help us understand media (even if not predict its accuracy), or does it give us some way of bettering our odds or avoiding hazards?

(I think I'm saying the same thing as hobotrashcanfires, but far less eloquently)

piL fucked around with this message at 02:35 on Jun 26, 2021

Epinephrine
Nov 7, 2008

piL posted:

[a long post]
OK so this is a lot of post, but I think you're missing the mark on what is meant here. Let's take one of your useful questions from your previous post for example:
"Is this clickbait?"
This heuristic assumes a testable and falsifiable proposition: that clickbait titles are less likely to accurately summarize the article or what actually happened than non-clickbait titles. We can probably agree on what is and is not clickbait and we can operationalize that definition into some concrete criterion or metric. Knowing that, we can now see over time whether this holds up. We can do this because there are some clear differences between what is and what is not clickbait. That's all we need it to do, and that's all we want.
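To sketch what I mean (the headline heuristic, the headlines, and the accuracy labels below are all made up for illustration):

code:

# Operationalize "clickbait" with some agreed-upon heuristic, then check over time
# whether clickbait titles summarize their articles accurately less often than
# non-clickbait titles. Everything below is invented for illustration.
CLICKBAIT_MARKERS = ("you won't believe", "what happened next", "this one trick", "shocking")

def looks_like_clickbait(headline):
    h = headline.lower()
    return h.endswith("?") or any(marker in h for marker in CLICKBAIT_MARKERS)

# (headline, did it accurately summarize the article) -- labeled after reading
observations = [
    ("You won't believe what this senator said", False),
    ("Senate passes appropriations bill 61-38", True),
    ("Is the economy about to collapse?", False),
    ("Fed raises interest rates by 0.25 points", True),
    ("Shocking new report on city budget", True),
    ("Council approves new transit line", True),
]

def accuracy_rate(rows):
    return sum(accurate for _, accurate in rows) / len(rows) if rows else float("nan")

clickbait = [row for row in observations if looks_like_clickbait(row[0])]
regular = [row for row in observations if not looks_like_clickbait(row[0])]

print(f"clickbait headlines accurate:     {accuracy_rate(clickbait):.0%}")
print(f"non-clickbait headlines accurate: {accuracy_rate(regular):.0%}")

With enough labeled headlines, a simple two-proportion comparison tells you whether the gap is bigger than chance - and that's the falsifiable claim: either clickbait titles track the underlying story less accurately or they don't.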

hobotrashcanfires
Jul 24, 2013

piL posted:

(I think I'm saying the same thing as hobotrashcanfires, but far less eloquently)

As both a high school and community college dropout, that was a more technically eloquent shredding of the bizarre idea to even attempt to apply anything remotely close to a true/false algorithmic framework to social structures of information than I ever could have attempted. It's literally impossible to bucket humanity's social, cultural, psychological, and ideological (maybe even a few other Al's in there) into any coherent and agreeably true model for anything which is thusly true for all we produce.

I suppose I just hope folks realize how they may have been treating and thinking about this issue and how it's all a bit of a microcosm of what you have to do analyzing and critiquing media.

There is absolutely room for a thread like this to examine the information our society produces. However, if the aim is to pre-determine what is and isn't admissible to even discuss purely at face value by some decided-upon (not seeing a whole lot of mutual agreement here) framework, then it simply is manufacturing consent by the prevailing or perceived authority of the forum, and will fall afoul of the same problems as much as if not more than any other model.

Is the Washington Post not the Amazon Washington Post simply because Trump leveraged the disastrous way capitalist wealth has completely taken over media? Are they good because Trump bad? Is the NY Post actually revealing the contents of Hunter Biden's laptop? Is it true, if true, does it matter? Why?

Any of these things - true, untrue, or re-shaped into various other media, whether organically, corporately, state-based, or out of some other political motivation - entirely changes what is and isn't.

Imagine thinking you know what is a lie, and that discussion revolving around lies should be relegated elsewhere in the era of Q. False can absolutely matter and sometimes even more than true.

You have to engage with the true and the false because both shape reality. It's healthy to tell people to shut the gently caress up for posting garbage, it's healthy for people to be told to shut the gently caress up in response for being wrong. Sometimes such exchanges can be brief and obvious, sometimes they take more digging into. That's okay, that's debate and discussion. It's almost never pretty. Mandated civility only results in high-minded assholes running discussion and I gotta say bit of a problem with that one.

Maybe just don't hold this forum in such high regard; its history truly does not deserve it. It's not the worst and has at times borne such fruit.

Epinephrine posted:

OK so this is a lot of post, but
"Is this clickbait?"

I don't think you understood that post at all. Is something clickbait, is it full of poo poo? Say so; pretending it doesn't exist does not make it not exist. There is no heuristic model that says so, nor does there need to be. Such a thing, if it ever exists, would throw errors constantly if it worked properly, until whoever built it clarified what they personally believe is good or bad.

This thread could be so good if it took any one of the innumerable interweaved problems our society has and started picking it apart.

E: rum

hobotrashcanfires fucked around with this message at 06:20 on Jun 26, 2021

Sekhem
Feb 13, 2009

Slow News Day posted:

This reads like an attempt to have your cake and eat it too: when people (including those defending PM) point out that PM is very bad at predicting or explaining individual media coverage, you respond with "well, of course: that is not its proper or intended usage"
I think you're just fundamentally misinterpreting me, honestly. I've never stated that it's bad at predicting or explaining individual media coverage. I've stated that it's bad at predicting or explaining the issues with the bare empirical facts of media coverage. But pieces of media aren't some simple valueless cataloging of empirical facts, they're laden with specific values, framing, narratives and attentions. This is where I'm claiming the PM is able to help us with predicting or explaining individual media coverage.

Sekhem
Feb 13, 2009

Epinephrine posted:

The reason why this thread is important is because we want to look at articles and make good inferences about what happened based upon what was reported and we need a rigorous framework to do that well. Building a framework to make better inferences is the purpose of this thread.
But "good inferences about what happened" doesn't necessarily mean wading into the domain of validity of direct empirical facts or data. Again, that's the work of scholars, not forums posters and media consumers. I'll leave it up to the reader to decide how successful Brown Moses was at getting to that level of analysis, but it's simply not media analysis and not something that's going to be realistic for random goons to conduct on a regular basis.

What we can do as media consumers is remove the various filters of framing, narratives, conceptual categories, relative proportional attention, etc. in order to answer the very important question of "what are the basic facts of the bare data being reported to us?". Once that's resolved, we can use our tools of political analysis to make judgements about the empirical claims being made - but this takes us outside the remit of media analysis. It's not relevant to this discussion.

Epinephrine posted:

So that's the thing: It did happen in other cases, in fact it's what Herman does in every case. In every case, he rejects Western reporting on the basis of it being propaganda and uses that space to insert fringe ideas. I harp on Rwanda because it's such a clear example of the point, but he plays the same game in his discussions of Bosnia, Kosovo, and Darfur.
Sorry, I don't think this is a real response to my argument - yes, I'm aware that you're alleging he used this process in a number of different cases. But this number is still just a fraction of the total number of cases he assesses, not universal. He still discusses instances where western media is propagandistic but factually accurate, hence the "worthy/unworthy" victims distinction. If your order of operations can't explain what led him to make these distinctions, then I don't think it's useful at all.

Ytlaya
Nov 13, 2005

hobotrashcanfires posted:

Sometimes it's obvious, sometimes it's anything but, more often than not it's just a bunch of cogs twirling about inside a poo poo machine whose only understanding is that their life, career, and prospects could be pulverized if they counteract inertia.

Yeah, this is basically correct. There are a ton of incentives for media organizations to cooperate with (for example) pushing state-supported narratives and disincentives against reporting that angers powerful interests/stakeholders. There will obviously be exceptions, and there are occasionally conflicts between different stakeholders (a media's owners might disagree with the current presidential administration, for example). But broadly speaking, a media organization usually isn't going to want to report on things that their owners, funders, or major sources (namely the US government) dislike. There's generally no need to be heavy-handed with this, since most of it is simply maintained through inertia, as you mentioned. If you're a reporter, why rock the boat when you can instead stay friendly with the White House Press Secretary? After all, you might lose access if you don't cooperate. It's certainly what I'd do if I were in their shoes and just wanted to live a comfortable, easy life. But more often than not, people who rise up within these organizations are usually simply going to have personal views that don't conflict with owners/etc in the first place, so there's little need for something like top-down censorship. Someone from an upper or upper-middle class background who went to an ivy and then became a reporter at the NYTimes (or whatever) is likely going to already have a worldview that isn't threatening.

As a side note, the article Thorn Wishes Talon mentions in their post is - ironically - actually a very good example of "media reporting that you should absolutely not consider trustworthy." It's representative of a LOT of reporting, particularly on issues connected to foreign policy. The article in question literally just references a "US intelligence assessment" and quotes a Pentagon official. That's about as close as you get to "media just acting as a government mouthpiece." It's not even laundered through the system of NGOs that exist to add extra legitimacy to this sort of thing. This doesn't mean that the conclusions of that article are false. It just means that it shouldn't seriously factor into your own opinions about the issue in question.

piL
Sep 20, 2007
(__|\\\\)
Taco Defender

Epinephrine posted:

OK so this is a lot of post, but I think you're missing the mark on what is meant here. Let's take one of your useful questions from your previous post for example:
"Is this clickbait?"
This heuristic assumes a testable and falsifiable proposition: that clickbait titles are less likely to accurately summarize the article or what actually happened than non-clickbait titles.

No, that's my point. Not only would our definition of clickbait probably be meaningless to people who aren't piL or Epinephrine, but if it were somehow meaningful to people and properly applied, then we're going to run afoul of Heisenberg. My attempts to perceive will affect the system. Our system of trustworthy metrics for determining clickbait material, if it could exist as something more solid than 'articles Epinephrine and piL agree are clickbait', and if it gained traction as acceptable truth, is exactly the system that clickbait generators are incentivized to subvert. Our truth would be time-limited.


Epinephrine posted:

We can probably agree on what is and is not clickbait and we can operationalize that definition into some concrete criterion or metric. Knowing that, we can now see over time whether this holds up. We can do this because there are some clear differences between what is and what is not clickbait. That's all we need it to do, and that's all we want.

Not quite. First of all, the point I was refuting was not that there is no value in finding falsifiable premises, but the claim that falsifiable premises are the only ideas of value to the thread.

Second, while you and I can come up with a falsifiable premise about whether we think something is clickbait or not, we cannot come up with a falsifiable premise about what is clickbait or not.

But OK, I believe in models even when they're wrong, so I think there is value here still. Let's work through it. How do we determine this? I see two ways to begin to tackle this question (and a rough sketch of what the operationalize-and-test step could look like is at the end of this post).

1. What do you think is clickbait? How do we operationalize that into a definition and support that particular method?

2. Ignoring your own opinions, how do we determine the appropriate criteria for what counts as clickbait?
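
Just to make concrete what "operationalize into a metric and check it over time" could even look like, here's a deliberately crude sketch. Everything in it is my own invention for illustration: the feature list, the toy titles and labels, and the assumption that we could ever get honest "did the title match the article?" judgments in the first place.

code:

# Toy sketch: operationalizing "clickbait" into a crude surface-feature flag,
# then checking the falsifiable claim that flagged titles summarize their
# articles accurately less often than unflagged ones. All phrases, thresholds,
# and example rows below are made up for illustration.

import re

BAIT_PHRASES = ("you won't believe", "what happened next", "this one trick")

def looks_like_clickbait(title: str) -> bool:
    """Crude, hand-picked criteria; a stand-in for whatever definition we agree on."""
    t = title.lower()
    has_phrase = any(p in t for p in BAIT_PHRASES)
    has_listicle_number = bool(re.match(r"^\d+\s", t))   # "7 things ..."
    has_shouting = any(w.isupper() and len(w) > 3 for w in title.split())
    return has_phrase or has_listicle_number or has_shouting

# Hypothetical hand-labeled rows: (title, did the title accurately summarize the article?)
labeled = [
    ("7 things the Senate vote means for you", False),
    ("Senate passes appropriations bill 61-39", True),
    ("You won't believe what the minister said", False),
    ("Minister resigns after audit findings", True),
]

def accuracy_rate(rows):
    return sum(ok for _, ok in rows) / len(rows) if rows else float("nan")

bait = [(t, ok) for t, ok in labeled if looks_like_clickbait(t)]
plain = [(t, ok) for t, ok in labeled if not looks_like_clickbait(t)]

# The falsifiable proposition: the first number should come out lower than the second.
print("accuracy | flagged as clickbait:", accuracy_rate(bait))
print("accuracy | not flagged:         ", accuracy_rate(plain))

And note that this is exactly the thing I expect to rot: the moment criteria like these gained any traction as acceptable truth, headline writers would route around them, so whatever numbers we got would be time-limited.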

Thorn Wishes Talon
Oct 18, 2014

by Fluffdaddy

(and can't post for 10 days!)

hobotrashcanfires posted:

I don't think you understood that post at all.

To be honest, I don't understand piL's post either. It is a night-and-day difference compared to their earlier post in terms of clarity and conciseness. You could tell me it was written by another poster and I would probably believe you.

What it appears to be is an elaborate attempt to refute a strawman. piL seems to have latched on to what Epinephrine said about expecting a good model to help us make inferences about what happened based on what was reported. The main thrust of their counter-argument, and what their word salad boils down to, is that it is impossible to determine or measure truth. But that is completely irrelevant because we are in fact not talking about epistemology here. We are talking about the reason why models exist at all: to act as frameworks that help us understand the world better, and to explain events as we observe them. Expecting a model to be falsifiable is not the same thing as expecting it to help you discover The Truth. But a good model should absolutely help you understand and make sense of new occurrences.

In philosophy of science, something being falsifiable means that there is a set of logically possible observations that contradict it. So what we mean when we say the Propaganda Model is not falsifiable is this: it passes any and every test thrown at it. This is not because it's an exceptionally strong and robust model, or that we haven't yet made any observations that contradict it (on the contrary!). Rather, it is because it is built out of "catch-all" clauses. For example, one of them states that if any media outlet publishes something that might go against the interest of elites, that just means the elites don't actually care about that topic. Another states that if the media reports something that goes against the interests of the elites on a topic the elites do care about, that just means the outlet (or the author, if it is an editorial) is marginal and lacks influence. The entire thing is catch-all clauses all the way down. As a result, any set of inputs fed into the Propaganda Model result in the exact same output: "Western mainstream media has an imperialistic bias." And if any contradictory examples are thrown at it, such as the BBC article linked earlier, one or more of the catch-all clauses are activated: elites don't actually care about immigration, BBC is mainstream but it's also not American and therefore has more freedom to criticize American immigration policy, etc.
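
To make the "catch-all clauses all the way down" point concrete, here is a deliberately cartoonish toy of my own (not anything from the book, and the clause wording is my paraphrase): whatever observation you feed it, every branch terminates in the same verdict, which is exactly what it means for no logically possible observation to contradict it.

code:

# Cartoon version of a model built entirely from catch-all clauses: every
# possible observation is absorbed by some branch, so no observation can
# contradict the conclusion. Purely illustrative; my own toy paraphrase.

from itertools import product

def catch_all_model(outlet_is_mainstream: bool,
                    story_criticizes_elites: bool,
                    elites_care_about_topic: bool) -> str:
    if not story_criticizes_elites:
        return "bias confirmed: coverage serves elite interests"
    if not elites_care_about_topic:
        return "bias confirmed: the elites don't actually care about this topic"
    if not outlet_is_mainstream:
        return "bias confirmed: the outlet is marginal and lacks influence"
    # Even the apparent counterexample gets absorbed.
    return "bias confirmed: the author lacks influence / the exception proves the rule"

# Every combination of inputs yields the same verdict.
for combo in product([True, False], repeat=3):
    assert catch_all_model(*combo).startswith("bias confirmed")

"Passes every test" here is a property of the branching, not of the world.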

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
evilweasel and others already repeatedly articulated the problems of the PM, which were also given at the beginning of the thread when some of the same users trying to promote it now made similar generalized attacks on media literacy. I'm going to summarize some of these issues as they appear to me. This is not exhaustive, but it articulates many of the root problems with a model of "everything and nothing", including its harm to good faith discussion.

1. Fuzzy lenses - The "lenses" which serve as the primary formal components of the model aren't clearly articulated and lack boundary conditions. Some of the lenses consist of separable observations on forms of media influence, which are long-held, trivially true under some circumstances, and neither original nor exclusive to the model. The PM did not discover access journalism or advertiser conflict, and these aren't functions exclusive to the settings the authors describe. I promise, flak is not exclusive to a corporate mass media ecosystem! These individual elements are made less useful by their muddled presentation in the PM. If you want to, as Ytlaya does, point out that an article has a single source and it's on background, then, great, that can be helpful in scrutinizing the piece and its context (Someone remind me to work up a short post on attribution practices sometime). That's not the PM though, and the PM doesn't help you identify that issue or its context.

2. Selective evidence and no testing - Herman and Chomsky's cited evidence for the PM is, to put it charitably, selective. For example, some samples are drawn exclusively from the New York Times on a single issue, or lean heavily on abuses of the Reagan administration. The authors do this because it's easy; it makes the conclusions of their work appeal to their target audience, and the media abuses often aren't in doubt. These case studies and narratives do not actually serve as strong support for their broad claim (and it truly is an extremely broad claim). A stronger model would hold well outside these settings - actual tests of the model's applicability, with limitations and consistent criteria of evaluation. The authors are not interested in articulating limitations or boundaries of their ideas; they're interested in promotion.

3. Inconsistent application - The model prevaricates on whether it can inform the interpretation of individual pieces of media. The authors want this both ways because it renders the PM and those employing it immune to criticism. In practice, the reasoning of the model is constructed from specific to general- a group of examples (selective ones whose interpretations appeal to the reader's prior beliefs) is deployed to make a general (across all mass media) claim.

4. Too many variables - Breadth of explanation is not a benefit. Using several overlapping lenses that may or may not be applicable to individual cases or broader narrative contexts means that the PM is infinitely versatile; some part of it can be deployed to explain any message. The result is the equivalent of an overfitted statistical model; some lenses are redundant, the model will attribute meaning to things that don't matter, and it is less informative than an alternative that doesn't claim such a broad scope (a toy sketch of the overfitting analogy follows this list). A detailed, specific accounting of, for example, different forms of advertising pressure, the details of how it is done, and where it's more or less impactful, is more useful than a "lens".

5. The elite interest loophole - Conversely, the most significant boundary condition for the model is the interests of the "elite", which are variously referenced as the political parties, corporations, and their managers. The authors assert that the systems of control presented by the model fail when there are disagreements among the elite, and that they vary with the extent to which other groups in society are interested in, informed about, and organized to fight about issues. But how can users tell the interests of the "elite"? With such a wide-ranging and nebulous definition of elite interests, there's no way except by working backward from the media under examination. So if you want to believe that a media message reflects the manipulation of the elite, then it does, and if you want to believe that it doesn't, then it doesn't. Whether the model applies is based on the desire of the user to assign interests and control to nebulously defined elites. As someone observed earlier, it's like reading the will of God into weather events. Is this article or media narrative the way it is because of the delegitimizing propaganda control of the elites? Is it because the elites are in conflict? Is it because the elites don't care? Or maybe those dastardly elites are inflating the opposition, pretending that the marginalized non-elites have more strength than they really possess? The interpretation and application of the model depends on what the user wants to believe, rather than what is. This is a really unhealthy relationship to information.

6. Proof and Faith - I disagree with others that the model is delegitimized by its authors merely dabbling in genocide denial. The problem (articulated well by evilweasel) is you can use the same model to argue simultaneously for and against the presence of elite media control in any specific circumstance, as well as argue toward any interpretation of media. Presented with the same information, the model can be used to say that a media narrative is propaganda or not propaganda, legitimate or illegitimate, true or not true. At root, the propaganda model of mass media is a cipher that encourages users to believe whatever they want by giving them the illusion of insight. It combines well-known preexisting information about the media to spin an overarching and uninformative mythology that panders to its target audience’s preferences. Users of the model become less interested in engagement with specific information about the media under discussion - it functionally makes them less media literate. Because PM users can ignore or bypass specific causal or contrary information to argue generally from the intentions of the "elites", they become resistant to contrary information. This also makes people who deploy the model uninterested in good faith discussion; unfalsifiable claims of wide-ranging propaganda control can't be reasoned with.
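
Since I lean on the overfitting analogy in point 4, here is the toy sketch I promised there, using invented data and numpy only. It has nothing to do with the PM's content; it just shows the statistical failure mode I'm gesturing at: a model with too many free parameters "explains" the sample it was built on nearly perfectly and does worse than a simpler one on new data.

code:

# Minimal overfitting illustration for point 4: a high-degree polynomial fits
# the training sample almost perfectly but generalizes worse than a simple
# line on new data drawn from the same process. Toy data only.

import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
x_test = np.linspace(0, 1, 100)
true_fn = lambda x: 2.0 * x + 1.0
y_train = true_fn(x_train) + rng.normal(0, 0.2, x_train.size)
y_test = true_fn(x_test) + rng.normal(0, 0.2, x_test.size)

def mse(model, x, y):
    """Mean squared error of a polynomial (coefficient array) on data (x, y)."""
    return float(np.mean((np.polyval(model, x) - y) ** 2))

simple = np.polyfit(x_train, y_train, deg=1)    # two parameters
flexible = np.polyfit(x_train, y_train, deg=9)  # as many parameters as points

print("train error (deg 1):", mse(simple, x_train, y_train))
print("train error (deg 9):", mse(flexible, x_train, y_train))  # near zero: "explains everything"
print("test error  (deg 1):", mse(simple, x_test, y_test))
print("test error  (deg 9):", mse(flexible, x_test, y_test))    # typically worse out of sample

That is the sense in which breadth of explanation is a cost, not a benefit.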

All of these problems are why I said the following in the OP materials:

quote:

A core issue with many people's approach to media literacy is they think of it as finding a single, true lens through which to understand information and the world- a rule or worldview or rubric that they can use to decide what sources are good or bad. This is often couched in the language of universal skepticism, or seeing through the "mainstream media." "I'm skeptical of every source" and "all media is biased" is bullshit. No one can be skeptical of every source equally, and all too often it means rejecting good sources that are just communicating challenging or unappealing information. Taking these positions actually makes a person even more vulnerable to disinformation, because disinfo campaigns actively target such individuals and prey upon their biases. The Intercept article I cited above and OANN will both tell you- they will give you the stories no one else will.

Similarly, a single theory (including, or even especially, “crit” theories that provide an overarching narrative telling you what sources are good or bad) will instead steer you toward messages that appeal to you for all the wrong reasons. There’s a reason these posts are a bunch of material pulled from different sources- a toolkit will make you much more intellectually versatile than a single mythological correct way to understand media.

I wrote that with the PM in mind, and the resulting cudgel approach to media that it entails is what people got probated for earlier in the thread. I wanted to wait to tackle the PM and similar mechanisms of media illiteracy until after we'd worked through a lot of more basic material. There are many more specific issues I could raise with the model (agency attribution and conspiracy, mass versus capital media as condition, implicit warrants, alternative models, misrepresenting other authors), but I'd much rather get back to my planned effortpost on Albert Hirschman's book on reactionary and progressive rhetorics, a thing flaks actually know and use in trying to influence public opinion on policy. It's old, it's got issues, its examples are all in political history, but people can directly apply it to a source and draw meaningful conclusions- including sometimes that the author has read the book and is deliberately using it to write a persuasive message!

piL posted:

What? I've lost track of the thread because I don't care about the PM and I'm not going to read multiple books to find out more about it and make informed judgments about its implementation. Is it really a point of agreement that only falsifiable models are of value to a thread on media analysis and communication? Shannon-Weaver, as applied in post two of this thread, is used in a manner that would be unfalsifiable. It makes no predictive claims, first of all, but to use it to make predictive claims about media intent and interpretation (vice signal accuracy) would require you to narrow a question so greatly as to be absurd.

Per the OP’s introduction, SW is a model of communication; it's a simplified representation that explains one set of relationships by sacrificing detail elsewhere. The example in the OP isn't real and isn't a demonstration of applying the model to media. It's intended to illustrate what the parts of the model are, in the same way that a classroom map of the state won't help you get across town. My principal goal in writing up the model was to provide a functional vocabulary for further discussion. Toward this end, and in keeping with the pluralist approach I describe in the OP, I do my best to be clear about any limitations or simplifications of the materials I provided.

At the same time, SW is an extremely falsifiable model. Alternatives to the relationships it describes can be articulated and tested. The relationships between concepts provided by the model are necessary to it. The relationships between parts of the PM are not. The lenses do not categorically apply in such a way that the pattern of relationships can be falsified. What's made SW remarkable is how universally it has held; its conceptualization of information as a stochastic error space is a foundation of all modern communication and information technology. (This is one of my favorite facts about the model, because it's a fully parallel expression of the logics of falsifiability in an applied setting).
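
For a concrete taste of what I mean by falsifiable, quantitative relationships coming out of that framework, here is a toy calculation of my own (not something from the OP writeup): Shannon's capacity of a binary symmetric channel with crossover probability p is 1 minus the binary entropy of p, a sharp numerical claim that measured systems could in principle contradict.

code:

# Toy illustration: the capacity of a binary symmetric channel with crossover
# probability p is C = 1 - H(p) bits per channel use. A sharp quantitative
# claim; real channels could in principle contradict it, which is what makes
# it falsifiable in the ordinary sense.

from math import log2

def binary_entropy(p: float) -> float:
    """H(p) in bits; defined as 0 at the endpoints."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity in bits per use of a binary symmetric channel with error probability p."""
    return 1.0 - binary_entropy(p)

for p in (0.0, 0.01, 0.1, 0.5):
    print(f"error prob {p:>4}: capacity {bsc_capacity(p):.3f} bits/use")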

piL posted:

There are entire swaths of questions a person could try and should try to ask about media that are by their very nature unfalsifiable without very rigorous and narrow definitions of all of the terms, definitions that would greatly reduce their practical value.

  • Is this article well written?
  • Is this source trustworthy?
  • Is this article true?
  • What types of sources are trustworthy?
  • What are some ways to notice that I am being manipulated by media?
  • What rhetorical techniques should be considered appropriate in a particular format and which should cause doubt in the reader?
  • Does this collection of articles on a subject represent a sufficiently diverse range of opinions to ensure that I'm well-versed on the arguments?
  • Is this an appropriate type of media to make and support this claim?
  • Should I spend $10 to access this media?
  • Does the funding source of a content generator affect the trustworthiness of the generated content?
  • Should I trust this content funded by this source?
  • Is this clickbait?
  • What is the author's intent?
  • How did the publication of a particular piece of media affect a particular situation?

All of these seem like appropriate discussion points for this thread and none of them have any place in any falsifiable model without defining very restrictive terms. Prescriptive models that address these questions could be generated or referenced and could be of value to this thread. They would by necessity be unfalsifiable and would be inappropriate for establishing claims of causal relationships or making predictions.

This list is a bit of a mess of prescriptive and descriptive questions ("is this article true?" is an empirical question that, yes, I think we can specifically interrogate). I provide tools to begin to address some of these questions in the OP material. These tools are useful because they do make causal claims and are based in defined terms or explanations. As Peirce, and Popper, and Shannon, and Weaver will tell you, information is useful to the extent that it can be falsified, to the extent that it is open to error.

fake edit:
Since I drafted that post you've expended a whole lot of words to indicate you're not familiar with the distinction between naïve and sophisticated falsifiability. This model example you're presenting is, uh, creative, but has little to do with what's being discussed. We're not trying to solve the problem of induction here, and no one is holding PM to anything like that standard. We also do not have to pretend that all truthfulness is relative to observers in order to make specific observations about the mechanisms of specific media. The PM makes descriptive claims- it just does so poorly, for the reasons articulated many times over. Prescriptive claims have to rely on a factual substrate or, again, if they don't:

fool of sound posted:

Generally if people have decided that they are opposed to truth that is deleterious to their ideology they should probably stay out of this thread and preferably subforum.

The PM does not meaningfully inform prescriptive behavior unless you want to just argue against any media that exists under capitalism or in a political context. The people making that argument look like this:

Sekhem
Feb 13, 2009
Fwiw I disagree with the arguments that are excluding falsifiability as a meaningful factor in this field. I actually think some of the benefit of the PM is that it's very clear and actually makes standards of falsifiability pretty easy to extract.

Iirc Chomsky in MC and his related books on media directly puts forward tests of its hypothesis. Given two events of similar character, the political economy of media outlined in the PM is able to predict the quantifiable metrics of disparity of reporting in a statistically significant aggregate sense.

Thorn Wishes Talon
Oct 18, 2014

by Fluffdaddy

(and can't post for 10 days!)

Sekhem posted:

Fwiw I disagree with the arguments that are excluding falsifiability as a meaningful factor in this field. I actually think some of the benefit of the PM is that it's very clear and actually makes standards of falsifiability pretty easy to extract.

I have no idea what you mean by this. Can you elaborate? What do you mean by "makes standards of falsifiability pretty easy to extract"?

Sekhem posted:

Iirc Chomsky in MC and his related books on media directly puts forward tests of its hypothesis. Given two events of similar character, the political economy of media outlined in the PM is able to predict the quantifiable metrics of disparity of reporting in a statistically significant aggregate sense.

This is a bit of a surprise, to be honest. I wasn't aware that PM can make these types of statistical predictions. Can you provide examples?

hobotrashcanfires
Jul 24, 2013


How about you demonstrate how you're actually an authority on the subject in any way whatsoever instead of whatever the gently caress this is.

Thorn Wishes Talon posted:

To be honest, I don't understand piL's post either.

All they did was put all the model chat into actual modelling chat.

Guess what? Models that do what some here demand are literally not possible. It's kinda funny that some book from 40 years ago must pass a muster that nothing can.

(USER WAS PUT ON PROBATION FOR THIS POST)

hobotrashcanfires fucked around with this message at 08:08 on Jun 26, 2021

Slow News Day
Jul 4, 2007

hobotrashcanfires posted:

How about you demonstrate how you're actually an authority on the subject in any way whatsoever instead of whatever the gently caress this is.

This is grossly inappropriate.

I AM GRANDO
Aug 20, 2006

Discendo Vox posted:


The authors want this both ways because it renders the PM and those employing it immune to criticism.


I think some of the heat your dismissal has been getting comes from the imputation of bad faith to the authors of the book. What convinces you that the authors have this specific goal and aren’t to be taken at their word?

hobotrashcanfires
Jul 24, 2013

Slow News Day posted:

This is grossly inappropriate.

Really sorry to transgress your bizarre threshold which apparently contains no debate or discussion, just an authority who has no obligation to actually partake in said conversation.

I am very rude.

Slow News Day
Jul 4, 2007

Antifa Turkeesian posted:

I think some of the heat your dismissal has been getting comes from the imputation of bad faith to the authors of the book. What convinces you that the authors have this specific goal and aren’t to be taken at their word?

The fact that they both have engaged in repeated genocide denial, both in MC and their earlier and later works, using the Propaganda Model or its various bits and pieces. In other words, the fact that the authors repeatedly misused their own model or its components to deny genocide (on the basis that any Western media coverage labeling the events as genocide was propaganda and therefore false) is evidence that they created the model to legitimize and support their preexisting biases.

We have covered this already, I think. No need to re-litigate it.

Fritz the Horse
Dec 26, 2019

... of course!

Antifa Turkeesian posted:

I think some of the heat your dismissal has been getting comes from the imputation of bad faith to the authors of the book. What convinces you that the authors have this specific goal and aren’t to be taken at their word?

I dunno maybe the multiple genocides Herman and Chomsky have done a lil' victim-blaming on? Don't worry, they weren't actually using their published framework where they discount Western news media as propaganda of the capitalist elite. So it's okay that they denied the Rwandan genocide, because they weren't actually using the academic framework they published multiple books on.

Like I've kinda just been following this thread from the sidelines but it's insane that we're seriously trying to "well, actually" genocide denial.



Presumably Chomsky and Herman are serious academics. Why the gently caress should we engage seriously with their framework on media and propaganda if it repeatedly results in not only denying horrific genocides, but blaming loving mass murders on the victims.

They have written books on this poo poo. You're telling me that I should listen to the genocide-deniers because while they're very serious academics on how we should understand mass media, they did some whoopsies wrt Serbia, Cambodia, Rwanda, etc and we should just ignore those?



Bottom line for me: if you claim to be a serious academician on [subject], you aren't allowed to gently caress up publicly, repeatedly, on [subject] and publish books where you deny multiple genocides and blame them on the victims.



For chrissake: how many genocides do you need to deny for your work to be thrown on the trashheap of history?


Sekhem
Feb 13, 2009

Thorn Wishes Talon posted:

I have no idea what you mean by this. Can you elaborate? What do you mean by "makes standards of falsifiability pretty easy to extract"?
It's not a very abstract argument I'm making, I'm just saying it's a very transparently presented model, which makes it easy to identify which hypotheses are testable and how testing them would be accomplished. I'm sure we've all encountered scholarship in related fields that relies on ornate language or elaborate conceptual edifices that make extracting these pretty difficult.

Thorn Wishes Talon posted:

This is a bit of a surprise, to be honest. I wasn't aware that PM can make these types of statistical predictions. Can you provide examples?
I'm not sure why this would be surprising, because it's pretty upfront. I have been generally pretty confused about the claims of vagueness or unfalsifiability being leveled here, because in my memory MC is a very clear and upfront work where such hypotheses are very frequently elaborated. I've had a cursory reading of the debate and critique surrounding the PM, and I've basically seen nobody attacking it on these terms - the critical responses to the PM generally take it as providing testable hypotheses but render a negative judgement on the empirical success of those tests.

Here's a quick example that directly attempts to test it, but really any look at the secondary literature and debates should make these kinds of predictions clear.
https://nacla.org/news/colombia-and-venezuela-testing-propaganda-model-0
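
To be clear about the shape of test I mean, here is a bare-bones sketch with numbers I am inventing on the spot (they are not figures from MC or from the linked piece): take two comparable events, count which outlets in a sample gave each one prominent coverage, and ask whether the disparity is larger than even-handed coverage plus noise would plausibly produce.

code:

# Sketch of the aggregate-disparity test: given coverage counts for two
# comparable events across a sample of outlets, ask whether the split is
# statistically distinguishable from even-handed coverage. The counts below
# are invented placeholders, not data from any study.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# rows: event A ("worthy victims" case), event B ("unworthy victims" case)
# cols: outlet covered it prominently, outlet did not
a, b = 42, 8
c, d = 11, 39

stat = chi_square_2x2(a, b, c, d)
print(f"chi-square = {stat:.1f}")   # compare against ~3.84 for p < 0.05 at 1 dof
# A large statistic says the disparity is unlikely to be even-handed coverage
# plus noise; it does not, by itself, tell you *why* the disparity exists.

A significant disparity on its own doesn't establish the mechanism, of course; it's the aggregate pattern across many such pairings that the model is claiming to predict.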
