echinopsis
Apr 13, 2004

by Fluffdaddy

ADINSX posted:

This is a good way of explaining it and I think another thing to point out that actually makes it intuitive to me is that: if you always change your answer after the door reveal the only way you could lose is if you picked correctly on the first guess. The odds of that happening are 1/3 so your odds of winning with this approach are 2/3

:chanpop:


reminds me of the very counterintuitive thing where you ask how many people you need in a group, on average, before the same birthday crops up twice, and it's far fewer than you'd imagine
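The always-switch argument in the quote above can be sanity-checked with a quick simulation (my sketch, not from the thread): switching loses only when the first pick was the car.

```python
import random

def monty_hall_switch_wins(trials=100_000):
    """Simulate the always-switch strategy: the host always opens a
    goat door, so switching wins exactly when the first pick was wrong,
    which happens with probability 2/3."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Switching loses only if the first pick was the car (prob 1/3).
        if pick != car:
            wins += 1
    return wins / trials

print(monty_hall_switch_wins())  # approaches 2/3 as trials grow
```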


bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
birthday paradox is always straightforward if you know landau notation. possible birthday matches grow as O(n^2), num peeps grows O(n)
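To make the O(n^2) point concrete: n peeps give C(n, 2) = n(n-1)/2 possible matching pairs, and the exact collision probability already passes 1/2 at n = 23. A quick sketch (mine, not from the post), assuming 365 equally likely birthdays:

```python
import math

def p_shared_birthday(n):
    """Probability that at least two of n people share a birthday
    (365 equally likely days, ignoring leap years)."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

# Pairs grow as O(n^2): C(n, 2) chances for a collision.
print(math.comb(23, 2))                 # 253 pairs among just 23 people
print(round(p_shared_birthday(23), 3))  # already better than even odds
```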

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

bob dobbs is dead posted:

birthday paradox is always straightforward if you know landau notation. possible birthday matches grow as O(n^2), num peeps grows O(n)

load bearing if

bob dobbs is dead
Oct 8, 2017

a true iff, with that second f bein a thing

Midjack
Dec 24, 2007



bob dobbs is dead posted:

birthday paradox is always straightforward if you know landau notation. possible birthday matches grow as O(n^2), num peeps grows O(n)

you don't even have to know the notation to understand it.

bob dobbs is dead
Oct 8, 2017

someone wanted to see mertonon in this thread iirc. i cut a pre-alpha release

https://github.com/howonlee/mertonon

Shame Boy
Mar 2, 2010

pro-click but mostly just for the readme

bob dobbs is dead
Oct 8, 2017

if you liked the readme you'll like the q and a sections starting with "is this gonna lead to a grey dystopian hellhole" and "is this gonna automate me out of a job"

also please post it in postingy places if you're feelin it

Captain Foo
May 11, 2004


Shame Boy posted:

pro-click but mostly just for the readme

Oysters Autobio
Mar 13, 2017

bob dobbs is dead posted:

someone wanted to see mertonon in this thread iirc. i cut a pre-alpha release

https://github.com/howonlee/mertonon

this is really cool! It's really giving me vibes of what The People's Republic of Walmart was describing, though that could just be because that book may have used one of the same quotes as you did about Martians. Not sure.

My only specific advice is related to docs/usage.md

Your scenario is a little too abstracted. Specifically, when you walk through the creation of nodes, you use examples so far removed from any domain that it's tough, as a non-expert, to grok how I would use this for my own problems. You stated earlier that your goal was to provide a system that doesn't require operations research specialists to run, so for your usage readme it would be far better to choose an internally consistent domain application of nodes rather than broad concepts like "cost centres". As a non-expert, I'll have a far easier time accepting the analogical limitations when translating across domains to grok the idea and features of the app, without needing to wade through abstraction.

I get that you might not want to dissuade users by giving too domain-specific of an example, but because this is already such an abstract endeavor it might be worth replacing the "Mimblzopper Cost Centres" with like, a logistics company or Uber eats ice cream delivery or whatever you have the most domain knowledge on.

Though you maybe risk dissuading people not in that domain, it's far easier for someone who sees something really interesting in this product to translate their use case if the example comes from a known, concrete domain.

Like if I'm interested in using this for, say, allocating software engineers in a B2B org, I'd still have a far easier time reading an example of a "Mimblezop, Rigman & Grugdek Partners" law firm allocating lawyers to cases or firms or whatever than a completely domain-agnostic abstraction detailing responsibility centers and cost centers.

Even though the domains are completely different, at least I could grok how the usage would work by making some approximate comparisons between examples.

Whereas with the abstracted "Mimblezop Cost Centres", I counterintuitively find it harder to go from abstract -> concrete than from concrete -> concrete, even if the latter means big leaps and bounds in my analogy mapping.

If you whipped up a similar walkthrough but for something domain specific, you could throw that at the top of your readme because it would serve as a far better intro / overview of your app than diving straight into your inspiration and design methodology.

Same advice, I think, for the leading examples and use cases in your intro: they talk about "doohickies" and such, and would be far better rewritten as a few different concrete, domain-specific examples. You can later explain, if you want, and map out how "Ice Cream Sales" was an example of a responsibility center and "Ice Cream ITSEC" an example of a cost center.

People are generally willing to accept that arbitrary, artificial examples are very simplified and only scratch the surface of potential usage.

Oysters Autobio fucked around with this message at 16:29 on Sep 3, 2023

bob dobbs is dead
Oct 8, 2017

someone put up an issue saying shits too abstracted yeah, i dunno if that was you. im writing a less abstracted one

the only domain where i can up and call ... 30ish.... c-levels and leadership and have them kinda listen to me maybe is software saas land, so i guess that'll have to be it

MononcQc
May 29, 2007

This week I decided to read an old classic essay from the 70s, which then gathered hundreds of citations; it's the kind of text whose ideas are so ingrained into modern theory about organizations and political groups that I believe I knew many of them without having read the original. It's therefore a great time to go read the source material, specifically Jo Freeman's Tyranny of Structurelessness.

The text was written as a comment on how various feminist groups early on would take a "leaderless" or "structureless" approach, often as a counter to established political and patriarchal groups: the looseness encouraged discussion and participation, but the author asserts that often, little more than insights would come out of these groups. In this essay, she covers what exactly tend to be the problems around structureless groups—which I think is what people quote a lot when referring to the tyranny of structurelessness as a catch-all phrase—and then ways in which groups could more effectively be structured for democratic purposes without necessarily replicating existing structures. This latter part, a much shorter one, seems to be remembered a bit less.

But let's start with the problems. The big obvious one is that there is no such thing as an actually structureless organization: any group of people that comes together for any purpose eventually has some structure emerge. Any interaction, conversation, skill, distribution of tasks, or variations in power will end up creating an implicit, possibly flexible structure that may change over time.

The issue, then, is that a group aiming to be structureless:

quote:

... does not prevent the formation of informal structures, only formal ones. [...] Similarly "laissez faire" philosophy did not prevent the economically powerful from establishing control over wages, prices, and distribution of goods; it only prevented the government from doing so. Thus structurelessness becomes a way of masking power, and within the women's movement is usually most strongly advocated by those who are the most powerful (whether they are conscious of their power or not). As long as the structure of the group is informal, the rules of how decisions are made are known only to a few and awareness of power is limited to those who know the rules.

Basically, the assertion here is that as you eliminate the explicit control structure, an implicit, hidden one still exists and can still exert itself, except that it is not bound by clear rules, and there are very few ways to know it is even there. An implicit structure may therefore, in practice, limit open participation rather than encourage it. In particular, while formal structures won't destroy the informal ones, they will prevent them from becoming dominant, and will provide tools to "attack" them if they aren't acting in the interests of the group at large. The author repeats, however, that most organizations with an explicit structure still contain many implicit structures within themselves.

Jo Freeman takes this opportunity to define what elites are about. Specifically, she points out that elites are not individuals; they are small groups wielding power over larger groups to which they have no responsibility, often without those groups' consent or knowledge. They are not conspiracies, and she describes them as "groups of friends who also happen to participate in the same political activities." Many such groups may exist within a larger one, so organizations can have multiple elites jockeying for power at the same time.

She mentions that elites tend to be formed based more on background, time, or personality than on actual talent, competence, or dedication to the cause: the former is how you make friends, whereas the latter is what an organization needs. Once the pattern is established, recruiting people who "fit in" tends to sustain it. If you're outside the elite, the only way in is to find a "sponsor" and become their friend until they bring you into a sort of inner circle. By comparison, explicit decision-making processes (which require some structure) make it easier for someone outside the elite(s) to participate.

For what it's worth, she mentions that elites aren't inherently bad, just inevitable; they can be useful and do very useful things as well. In structured organizations, these elites are less likely to govern than in unstructured groups. In structureless groups, since no one put them in power, no one can take their power away either. They may try to act responsibly to keep their influence, but only at their own will and in their own interests: the group can't demand it of them.

The author also looks at the concept of "stars", volunteers or people who become very popular with the public or the media. She states that this is a sort of natural outcome, because the public expects a spokesperson to represent a group:

quote:

But because there are no official spokespeople nor any decision-making body that the press can query when it wants to know the movement's position on a subject, these women are perceived as the spokespeople. Thus, whether they want to or not, whether the movement likes it or not, women of public note are put in the role of spokespeople by default.

This is due to external expectations: when these de-facto spokespeople are "chosen" by the media, they become "stars" who risk being resented by the people inside the group or movement, and who in turn risk feeling alienated and eventually leaving the cause. By not having any mechanism to choose its spokespeople, the group implicitly lets the public decide, and loses any control over it.

Other problems happen when people get tired of "just talking" and want to turn to political action; many structureless groups are incapable of it. Those that can function often have some very specific properties:
  1. The group is task-oriented, and the task determines what needs to be done and when (e.g.: "organizing a conference")
  2. It is small and relatively homogeneous; it has a common language, which limits confusion—diverse people interpret words and actions differently, which requires more discussion and repair.
  3. There is a high degree of communication, and larger groups often succeed if made of smaller groups with partial overlap (~5 people per small group, maybe up to 15 for large ones)
  4. There is a low need for skill specialization, and anyone can do anything
The larger the group, the less likely it is for all these conditions to be met. Organizations that struggle to turn a group's motivation into action are likely to see their members poached by other groups who can better harness the motivation of individuals. They can also be "infiltrated" by external groups wanting to take over, which may do so by establishing their own new elite.

Specifically in this latter case, the older, established elite has few ways of discussing the threat in the open without exposing its own covert structure. Anti-elitism and calls for structurelessness are often its best play, along with trying to exclude its opponents, possibly by re-defining the group's existing purpose to align with the existing elite, or by re-defining the opponents as bad actors (for example, by Red-baiting). This basically means institutionalizing the elite's power structure, which isn't always possible. She adds that the less structured a group is, the less control it has over the directions in which it develops and the projects it engages in:

quote:

If the movement continues deliberately to not select who shall exercise power, it does not thereby abolish power. All it does is abdicate the right to demand that those who do exercise power and influence be responsible for it.

I should point out that throughout the text, she mentions that none of these criticisms imply that structured organizations are immune to these problems. However, structured organizations may have defined means of dealing with them.

Jo Freeman then gets into the concepts required to properly structure power, without necessarily replicating existing (often problematic) structures. She mentions a need for continuous experimentation and re-structuring, with various approaches possibly being needed for various situations. The principles are:
  1. Delegation of specific authority to specific individuals for specific tasks by democratic procedures
  2. Require those to whom authority has been delegated to be responsible to those who selected them (any power is exercised at the will of the group)
  3. Distribution of authority among as many people as is reasonably possible (avoid monopolies and require consultation)
  4. Rotation of tasks among individuals (balance letting people develop skill and avoiding people "owning" tasks)
  5. Allocation of tasks along rational criteria (ability, interest, responsibility are key factors that should be kept in mind; developing new skills should be done through apprenticeships)
  6. Diffusion of information to everyone as frequently as possible. Information is power. Access to information enhances one's power (people can be more effective with more information and more power)
  7. Equal access to resources needed by the group (e.g.: someone having a monopoly over a printing press, specialized skills they won't train others in, or information they'll withhold can wield undue influence over the group)
(the bits in bold are literal quotes)

On these principles, she concludes:

quote:

When these principles are applied, they insure that whatever structures are developed by different movement groups will be controlled by and responsible to the group. The group of people in positions of authority will be diffuse, flexible, open, and temporary. They will not be in such an easy position to institutionalize their power because ultimate decisions will be made by the group at large. The group will have the power to determine who shall exercise authority within it.

To me, a lot of the comments ring true, and it's obvious that many of the criticisms applied to feminist groups in past decades apply pretty directly to corporate environments as well. I'm not surprised to see a lot of the discourse borrowed elsewhere (nor am I surprised to see some anarchists who dislike the text; others, including communalists seem to take it as evidence of a need for "federation"). I do appreciate the last bits on the theory of how to better apply power, and the call to experiment with structure more actively, rather than trying to throw it away altogether.

All in all, and if I can be personal here, I'm a bit relieved because it seems like the paper at least did not preemptively discredit my whole talk on feedback loops and complexity that has a whole bit on "nominal" vs. "emergent" organizational structures and the need to align actions on both levels when trying to enact change. It would have felt a bit embarrassing to give the whole speech in front of crowds to find out it had been proven to be wrong decades before I was born (which, of course, doesn't mean I'm right either).

bob dobbs is dead
Oct 8, 2017

this is social science, nobody has sigfigs anyways. best to only talk of proof when we gots some proofs

Oysters Autobio
Mar 13, 2017
I would really love to hear a good post-Cold War analysis of Soviet bureaucracy and all the negative aspects that can be attributed to that experiment with structured and centralized control.

Like, are phenomena like nomenklatura systems (i.e. informal patronage networks superseding democratic mechanisms like electoralism, or meritocratic mechanisms like HR competition processes) simply inevitable and basically a "cost of doing business", or were there specific aspects of Soviet structures that led to the rise of bureaucratic power?

I know this is a tricky topic to approach "apolitically", but often those criticising any centralization point to these well-documented experiences as simply the sort of mechanisms that are inevitable in an overly bureaucratic system. Are there alternative centralized structures that aren't "bureaucratic" in nature?

bob dobbs is dead
Oct 8, 2017

large systems of variables constrained by constraints at thermodynamic limit encounter nonequilibrium thermodynamics crap eg., you can't loving throw a rock in em without hittin a power law. cf. montanari-mezard book (https://web.stanford.edu/~montanar/RESEARCH/book.html). this would indicate that this shits universal in the strict physical sense, a conjecture that led pareto to literally become a fascist. however, in actual practical computation there is pretty quotidian countermeasure of random restart (cf. https://www.cs.cornell.edu/gomes/pdf/2000_gomes_jar_phenomena.pdf)

since mononcqc dives into paperland and fishes up a santa fe institute paper like a third of the time, it would be remiss of me not to mention that this is an important main dealio of the sfi

so "large systems of variables constrained by constraints" is an inescapable fact of economics, so nomenklaturas and fuerdai / taizidang appearing is like, an inevitable pollution of such systems but with great and working countermeasures in hard computational satisfiability domain. I tend to believe this is why revolutions tend to have a dodo bird race quality to them

inspection in serious detail reveals a lotta poo poo that would need to be proved for this conjecture to be correct, which is why i'm provin them in the weekends instead of proffering this as a coherent theory of value inequality and class quite yet. it'll get there. ultimately there will also have to be a pretty material empirical component
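The random-restart countermeasure mentioned above (cf. the linked Gomes et al. paper) can be sketched generically. This is a toy illustration of the mechanism only: `toy_attempt` and its success probability are made up, and it is the heavy-tailed run-time distributions of real randomized search that make restarts pay off so dramatically in practice.

```python
import random

def search_with_restarts(solve_attempt, cutoff, max_restarts=1000, seed=0):
    """Generic random-restart wrapper: if one randomized search run
    exceeds `cutoff` steps, abandon it and start fresh. Against
    heavy-tailed run-time distributions this bounds the damage that
    rare, very long unlucky runs would otherwise do."""
    rng = random.Random(seed)
    total_steps = 0
    for _ in range(max_restarts):
        result, steps = solve_attempt(rng, cutoff)
        total_steps += steps
        if result is not None:
            return result, total_steps
    return None, total_steps

# Made-up stand-in for a randomized SAT-style search: each step has a
# small chance of stumbling on a solution, so some runs finish fast
# and unlucky ones would wander far past any sensible cutoff.
def toy_attempt(rng, cutoff):
    for step in range(1, cutoff + 1):
        if rng.random() < 0.01:
            return "solved", step
    return None, cutoff

print(search_with_restarts(toy_attempt, cutoff=50))
```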

bob dobbs is dead fucked around with this message at 16:55 on Sep 4, 2023

MononcQc
May 29, 2007

bob dobbs is dead posted:

since mononcqc dives into paperland and fishes up a santa fe institute paper like a third of the time, it would be remiss of me not to mention that this is an important main dealio of the sfi

I've covered 56 papers to date by my tagging (god drat) and I think there's been 1 clearly identified sfi paper (tacit transmission of knowledge) and possibly a few from other authors slipped by, but I think 1/3 would be quite surprising.

bob dobbs is dead
Oct 8, 2017

god drat, i just remembered that one clearly

Captain Foo
May 11, 2004


quick maths (structureless, modulo peeps)

MononcQc
May 29, 2007

Oysters Autobio posted:

I would really love to hear a good post-Cold War analysis of Soviet bureaucracy and all the negative aspects that can be attributed to that experiment with structured and centralized control.

Like, are phenomena like nomenklatura systems (i.e. informal patronage networks superseding democratic mechanisms like electoralism, or meritocratic mechanisms like HR competition processes) simply inevitable and basically a "cost of doing business", or were there specific aspects of Soviet structures that led to the rise of bureaucratic power?

I know this is a tricky topic to approach "apolitically", but often those criticising any centralization point to these well-documented experiences as simply the sort of mechanisms that are inevitable in an overly bureaucratic system. Are there alternative centralized structures that aren't "bureaucratic" in nature?

I have no answer to this, but the sort of recurring theme I see about centrally managed systems is that:
  • there's a lot of richness and variation throughout the system
  • there's an even higher amount of connections and relationships which are really important as well in defining what happens
  • there are limits to how much of that information can be perceived, transmitted and understood, which in turn creates an incentive for simplification that serves people managing the system ("synoptic legibility")
  • there are limits, delays, and variability in how actions can be transmitted through the system (and the reinterpretation of orders and procedures in context varies further)
  • these implicitly end up creating limits to how well control can be applied by a central system and to its ability to understand and manage itself
  • the control structure ideally has to account for itself and how it is understood by other units in the system and how its own acts and presence will impact the system
This last point was one of the killer things that made it hard for cybernetics to work: the designer is inside the system and everything they do and all the information they acquire feed back into it. There is no easy "outside" point of view, and any map one draws that's detailed enough has to contain itself.

If you want centralized structures that aren't bureaucratic, I think they'd be expected to have severe size or complexity limits, or to operate in particularly stable environments.

That being said, I haven't seen strict literature saying "here are central management systems that work great and scale up more than we thought"—it feels like we're on a continued timeline of having believed that to be possible, with social scientists and others constantly finding out how inaccurate that idea was.

bob dobbs is dead posted:

god drat, i just remembered that one clearly

A coworker has thrown this one my way: https://www.sciencedirect.com/science/article/pii/S1090513823000557 from sfi. I want to read it at some point, though I'm always a bit doubtful about papers that seem to be gunning for the formula of life. There are cool insights in them about what sorts of behaviours may happen, but sometimes it's hard for me to know how serious they are about applicability, as opposed to just developing a nifty idea for its own sake.

MononcQc fucked around with this message at 20:12 on Sep 4, 2023

bob dobbs is dead
Oct 8, 2017

i got the martians quote from jeff shrager, who was herb simons last grad student and who wrote me one of my rec letters for grad school, btw

Armitag3
Mar 15, 2020

Forget it Jake, it's cybertown.


bob dobbs is dead posted:

i got the martians quote from jeff shrager, who was herb simons last grad student and who wrote me one of my rec letters for grad school, btw

shrager was right?

MononcQc
May 29, 2007

This week I read David Woods' The Strategic Agility Gap: How Organizations Are Slow and Stale to Adapt in Turbulent Worlds, an open-access chapter that surveys and pulls together many of the concepts he has written about in the past, particularly around the need for organizations to balance growth in capabilities with the ability to adjust to the changes those capabilities enable.

The idea here is that growth in capability—often due to better technology—brings rapid change at a societal level: new opportunities are found, complexity grows, and new threats emerge. New capabilities generally mean growth, expansion, bigger scales, and more interactions, which means more surprises. On the other hand, organizations are generally slow and stale when it comes to adapting to these threats or seizing these opportunities:

quote:

As capability grows to improve performance on some criteria, interdependencies become more extensive and produce surprising anomalies as the systems also become more brittle.

The strategic agility gap is the difference between the rate at which an organization adapts to change and the rise of new unexpected challenges at a larger industry/society scale. It is a mismatch in velocities of change and velocities of adaptation.

This figure is attached: [image not reproduced in this archive; it plots the accelerating pace of change against the slower pace of organizational adaptation, the widening difference being the strategic agility gap]

Because the risks are difficult to see ahead and the growth is continuous, disturbances and challenges risk cascading; this requires anticipating challenges and building a "readiness to respond", to avoid having to generate and deploy responses while the challenge is taking place. Here the text seems to intend something different from just having a plan for specific challenges; the words used are "organizations need to coordinate and synchronize activities over changing tempos, otherwise decisions will be slow and stale". This hints at overall response patterns and reorganization more than at a runbook with specific scenarios.

To provide examples of a failing case and a successful one, Woods covers the Knight Capital collapse from 2012 (other great link) and a transport company dealing with Hurricane Sandy (illegal source).

In the case of Knight Capital, they rolled out code that repurposed an old feature flag, and the deployment failed on one of eight servers. When it went live, it produced unexpected behavior that ran more transactions than expected; rolling it back produced even more anomalous behavior because of the flag. The people involved struggled to understand the issue. Woods mentions that it took a while before upper management was informed and then authorized to stop trading. By then, less than an hour had passed, but it was too late, and the company went bankrupt from its now untenable market position.

The author picked it as an example that shows that:
  1. small problems interact and can escalate quickly
  2. as effects cascade, roles struggle to understand the situation and figure out how to react
  3. non-routine responses are more difficult to get authorization for
  4. this requires more coordination which slows things down while effects still amplify
  5. response can't keep pace with events, particularly when communications are serialized vertically through the organization
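The repurposed-flag failure mode in the Knight Capital case lends itself to a small illustration. This is a hypothetical sketch: the flag name echoes public accounts of the incident, but the functions and behaviors here are invented, not Knight's actual system.

```python
# Hypothetical sketch of the failure mode described above: a flag name is
# repurposed, and one server still runs the stale binary, so the same
# config bit triggers two very different behaviors across the fleet.

def new_server(order, flags):
    # Repurposed meaning of the flag: "use the new order router".
    if flags.get("power_peg"):
        return [f"route-new:{order}"]
    return [f"route-old:{order}"]

def stale_server(order, flags):
    # Old meaning of the same flag: "replay a burst of test orders".
    if flags.get("power_peg"):
        return [f"test-order:{order}-{i}" for i in range(1000)]
    return [f"route-old:{order}"]

flags = {"power_peg": True}                  # deploy flips the flag fleet-wide
fleet = [new_server] * 7 + [stale_server]    # the deploy missed one of eight servers
orders = [len(server("ORD1", flags)) for server in fleet]
print(orders)  # seven servers behave normally; one floods the market
```

The point the sketch makes is the one in the list above: one small configuration reuse plus one incomplete deployment compose into a behavior no single component exhibits on its own.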

The comparative case of a large transportation firm that reconfigured itself during Hurricane Sandy names the following elements behind its effective adaptation. Quoted literally from the text, they:

  • re-prioritized over multiple conflicting goals,
  • sacrificed cost control processes in the face of safety risks,
  • valued timely responsive decisions and actions,
  • coordinated horizontally across functions to reduce the risk of missing critical information or side effects when replanning under time pressure,
  • controlled the cost of coordination to avoid overloading already busy people and communication channels,
  • pushed initiative and authority down to the lowest unit of action in the situation to increase the readiness to respond when unanticipated challenges arose.

This, Woods mentions, helped balance what is called the efficiency-thoroughness trade-off (ETTO), a principle stating that needs for safety tend to reduce efficiency, and demands for productivity tend to reduce thoroughness; both follow from people having limited time, which puts the two values in tension. Specifically, the firm sacrificed economics and standard processes to keep up with events, using patterns that already existed within the organization, since adapting to surprises was a normal experience there.

In comparing both cases, the author mentions that following a plan is not enough in these situations. There's a need for anticipation and initiative, particularly when events challenge existing plans. The difference between the two organizations is that for the transportation company:

quote:

From facing surprises in the past, the varying roles/levels had opportunities to exercise their coordinative ‘muscles,’ even though this specific event presented unique difficulties. In the strategic agility gap, the challenge for organizations is to develop new forms of coordination across functional, spatial, and temporal scales—otherwise organizations will be slow, stale and fragmented as they inevitably confront surprising challenges.

While I personally feel the time scales between cases are very different for the comparison, they probably do a decent job of demonstrating the types of behaviors on each side of the accelerated trajectory line.

The paper then shifts to a "Systems are messy" section, recalling the old WWII term SNAFU, standing for "Situation Normal: All hosed Up". Standard plans inevitably break down, and some people in some roles do "SNAFU catching", often in hard-to-detect ways:

quote:

all organizations are adaptive systems, consist of a network of adaptive systems, and exist in a web of adaptive systems—i.e., the resilience engineering paradigm. All human adaptive systems make trade-offs to cope with finite resource and all live in a changing world. The pace of change is accelerated by past successes, as growth stimulates more adaptation by more players in a more interconnected system.

The point here is that operating within the strategic agility gap is unavoidable. Organizations love to rationalize this away:
  • Since SNAFUs occur rarely, this is a low priority issue
  • There's a record of improvement that reduces the challenge SNAFUs represent
  • Poor response when SNAFUs occur is due to people who fail to follow the plan and design

Woods states directly that these rationalizations are empirically, technically, and theoretically wrong. When surprises are framed as deviations from the established plan, the compliance pressure that follows undermines the system's adaptive capacities. The background of improvements followed by a sudden collapse surprises and confuses people within the system. The argument here is that this is normal: as scale and interdependencies increase, performance increases, but so does the proportion of large collapses and failures.

The Resilience Engineering position here is that we shouldn't be surprised by the failures, but by how few of them we have. One of Woods' favorite laws is the fluency law, which states:

quote:

well adapted activity occurs with a facility that belies the difficulty of the demands resolved and the dilemmas balanced.

The reason we see so few failures is that adapting to SNAFUs takes place continually, and it is nearly invisible. This is, in fact, one of the tenets of resilience engineering.

Past successes in these situations drive effective leaders to take advantage of improvements and push their systems to do even more, and this creates adaptive cycles which widen the strategic agility gap. Organizations end up living in that gap, and to thrive there they need to develop and sustain the ability to continuously adapt.

Resilience Engineering researchers turn to web operations in order to study this: outages and near-misses are incredibly common even in the best organizations, and things change so fast that they provide a great laboratory to study constraints and shifting opportunities and risks. The key ingredients identified are:
  • anticipation: seeing signs of trouble and starting adaptation before it becomes definitive
  • contingent synchronization: based on pacing, roles at different levels coordinate differently
  • readiness to respond: developing and mobilizing response capability before surprises
  • proactive learning: studying how surprises are caught and resolved before major collapses or accidents

To express and apply initiative, it needs to be pushed down closer to the action; this can be miscalibrated in a way that fragments efforts and makes units work at cross-purposes. Since we can't just enforce plans harder, resilience engineering seeks system architectures that can adjust the expression of initiative as the potential for surprises varies. This requires prioritizing and sacrificing some goals as conflicts arise. Proactive learning is key there, and not just learning from events that cause economic loss or harm after the fact.

There's also a good call for reciprocity, which I'll use as the author's closing words:

quote:

Effective organizations living in the gap build reciprocity across roles and levels. Reciprocity in collaborative work is commitment to mutual assistance. With reciprocity, one unit donates from their limited resources now to help another in their role, so both achieve benefits for overarching goals, and trusts that when the roles are reversed, the other unit will come to its aid.
[...]
Units can ignore other interdependent roles and focus their resources on meeting just the performance standards set for their role alone. Pressures for compliance undermine the willingness to reach across roles and coordinate when anomalies and surprises occur. This increases brittleness and undermines coordinated activity. Reciprocity overcomes this tendency to act selfishly and narrowly. Interdependent units in a network should show a willingness to invest energy to accommodate other units, specifically when the other units’ performance is at risk.
[...]
Episodes of surprise provide the opportunity to see when and how people re-prioritize across multiple goals when operating in the midst of uncertainties, changing tempos and pressures.

MononcQc
May 29, 2007

This week, I wanted to cover a collective paper written by a collaborating group of researchers and artists, titled AI Art and its Impact on Artists.

The paper starts with an overview of how image generation works at a high level, starting with Convolutional Neural Networks (CNNs) doing image recognition, then variational autoencoders (VAEs), which use mirrored neural networks to enable the first generative models (such as VQ-VAE-2), followed by generative adversarial networks (GANs), where a generator network tries to fool a discriminator network that evaluates how realistic an image is. This latter type of tech eventually got augmented with an ability to consider tags describing the data, and was used for images as large as 512x512. Natural Language Processing (NLP) allowed increasing the complexity of texts and generated images, and the inclusion of Large Language Models (LLMs) led to natural language prompts.

Eventually (in the last few years), diffusion models, inspired by the physics of diffusion processes (they apply noise to an image and then learn to de-noise the result), led to models not constrained by natural language understanding. This lands us close to where we are with Stable Diffusion, DALL-E, Midjourney, and others. Models of these types are trained on large image datasets such as JFT-300M or LAION (which has sub-variants), which contain hundreds of millions to billions of image-text pairs. In total, the paper lists roughly 20 commercial products using various datasets.
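The forward half of that noising process has a simple closed form, which is what makes training these models tractable. Here's a minimal numpy sketch of it, with a made-up 8x8 "image" standing in for real data (the schedule values are illustrative, not any specific model's):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t from the forward (noising) process q(x_t | x_0).

    Uses the closed form x_t = sqrt(alpha_bar_t) * x0
    + sqrt(1 - alpha_bar_t) * noise, where alpha_bar_t is the
    cumulative product of (1 - beta) up to step t.
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
image = rng.uniform(-1.0, 1.0, size=(8, 8))  # stand-in for a real image
betas = np.linspace(1e-4, 0.02, 1000)        # a common linear noise schedule

slightly_noisy = forward_diffuse(image, 10, betas, rng)   # mostly still the image
mostly_noise = forward_diffuse(image, 999, betas, rng)    # essentially pure noise
```

The generative model is then a network trained to run this in reverse, predicting the noise at each step; conditioning that denoiser on a text embedding is what turns a prompt into an image.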

The authors point out that there's a tendency to anthropomorphize image generators, talking about them as if they're artists, even going as far as saying they are "inspired" by the data in their training set. The authors disagree, and present some philosophy of art to support their point, defining art as a uniquely human endeavor connected specifically to human culture and experience:

quote:

[W]hile non-human entities can have aesthetic experiences and express affect, a work of art is a cultural product that uses the resources of a culture to embody that experience in a form that all who stand before it can see. [...] Further, this process must be controlled by a sensitivity to the attitude of the perceiver insofar as the product is intended to be enjoyed by an audience. [...] This control over the process of production is what marks the unique contribution of humanity: while art is grounded in the very activities of living, it is the human recognition of cause and effect that transforms activities once performed under organic pressures into activities done for the sake of eliciting some response from a viewer.

As an example, they mention a robin singing or a peacock dancing under organic pressures, whereas human song and dance serve purposes beyond organic ones, including cultural ones and communication. Image generators, however, do not have that understanding of the audience's perspective, and do not undergo a similar artistic process. Instead, they imitate whichever parts of the process are embodied in the works within the training set. Works from image generators may be aesthetic, but not artistic: truly artistic works generally must also be aesthetic, but aesthetics alone are mostly a matter of technique, which isn't sufficient to be truly artistic.

This plays out in how image generators can give good results, but require extensive training to be shown what the "right" output should be, whereas humans do not require such criteria. This makes image generators great at copying style, even though, the authors say, it is very rare for human artists to be able to copy each other's styles:

quote:

The very few artists who are able to do this copying are known for this skill. An artists’ ‘personal style’ is like their handwriting, authentic to them, and they develop this style (their personal voice and unique visual language) over years and through their lived experiences.

The adoption of any particular style of art, personal or otherwise, is a result of the ways in which the individual is in transaction with their cultural environment such that they take up the customs, beliefs, meanings, and habits, including those habits of aesthetic production, supplied by the larger culture. As philosopher John Dewey argues, an artistic style is developed through interaction with a cultural environment rather than bare mimicry or extrapolation from direct examples supplied by a data set.

In short, the development of an artist's style comes from repeated interactions with their environment and culture, and there's a cycle of influence and impact shaping it. It is unique to each artist and does not arise in isolation, but from active participation and growth in a way that is constantly evolving. By comparison, image generators, once trained, stop changing until they are explicitly trained again, either from scratch or through fine-tuning. The abstract interpretations and sentimental imagery are missing, the paper argues.

quote:

image generators are not artists: they require human aims and purposes to direct their “production” or “reproduction,” and it is these human aims and purposes that shape the directions to which their outputs are produced. However, many people describe image generators as if these artifacts themselves are artists, which devalues artists’ works, robs them of credit and compensation, and ascribes accountability to the image generators rather than holding the entities that create them accountable.

This is why we need to be really careful about the words we choose to describe image generators. Anthropomorphisation shifts accountability and credit between the automation, the stakeholders who produce and train it, and the artists whose output is used to train it.


Impact on Artists

The paper at this point shifts to covering the impact of AI art on artists, under many lenses:
  1. Economic loss
  2. Digital artwork forgery
  3. Hegemonic views and stereotyping
  4. Effects on cultural production and consumption

For economic loss, the argument is that an artist's style is formed over years of honing their craft through practice, observation, schooling, and the cost of materials (books, supplies, tutorials). Their output is then used without compensation by companies like Stability AI (companies backed by billions from venture capitalists), who then compete with them directly in the market. Folks like Sam Altman of OpenAI specifically call out the expectation that creatives' jobs will be replaced; Stability AI CEO Emad Mostaque has accused artists of wanting a “monopoly on visual communications” and “skill segregation”. The paper retorts:

quote:

To the contrary, current image generation business models like those of Midjourney, Open AI and Stability AI, stand to centralize power in the hands of a few corporations located in Western nations, while disenfranchising artists around the world.

The behavior observed is that image generators can output content much faster and cheaper, but without nearly as much depth of expression as a human. They allow flooding the market with "acceptable" imagery that supplants demand for artists. The paper then covers multiple examples of this already happening in the TV, movie, and gaming industries.

While this hurts fully employed artists, they point out that self-employed artists are also likely to suffer. They point out the example of the Clarkesworld science fiction magazine, which got flooded so much by AI-generated sci-fi that they had to stop accepting all submissions, and eventually re-opened them while only accepting submissions from previously published authors. The net impact, they say, is that rather than democratizing art, the number of artists who can share their work and receive recognition is reduced.

Many artists already have to use image generators in order to keep their jobs, and report their role slowly shifting to "clean up work, with no agency for creative decisions". Basically, if they want to keep working, they have to make the output of image generators good enough, which reinforces the pattern that de-skills their work. Actual artwork allowing full creative control is increasingly likely to only be affordable to people who are already independently wealthy, and the development of artists from other backgrounds is likely to stall.

In terms of digital artwork forgery, the lack of consent and attribution is also problematic. Copyrighted images and photographs are used to train image generators, which often produce near-exact replicas. While artists have increasing trouble living from their art, some companies directly market their ability to replicate a given artist's style. Often, the artist's name gets attached to the generated images (because it's their style) by the people who requested them, and their reputation slowly gets tied to images they would never have agreed to produce.

In some cases, generated images are used in harsher contexts such as harassment, hate speech, or genocide denial. This existed before image generators, but happens much faster now. Artist Sarah Andersen states:

quote:

"Through the bombardment of my social media with these images, the alt-right created a shadow version of me, a version that advocated neo-Nazi ideology... I received outraged messages and had to contact my publisher to make my stance against this ultraclear.” She underscores how this issue is exacerbated by the advent of image generators, writing "The notion that someone could type my name into a generator and produce an image in my style immediately disturbed me... I felt violated”
Since the artists' style is a product of their own growth and history, this becomes far more personal than people realize.

Moving on to hegemonic views and stereotyping, the authors report that underrepresented groups, those more used to being invisible, can attest to seeing distorted versions of themselves in the output of image generators, which often warp reality based on stereotypes:

quote:

For instance, [Senegalese artist Linda Dounia Rebeiz] notes that the images generated by Dall-E 2 pertaining to her hometown Dakar were wildly inaccurate, depicting ruins and desert instead of a growing coastal city.
As a personal note, I saw an article just yesterday on how challenging it is to ask for generated images of black doctors helping white children, and it similarly reflects how dominant views and media shape the output.

The objectification of some cultures goes further, where "synthetic models" are generated and licensed to organizations, and the benefits go to the people who generate the images rather than the people from the cultures on which they are based. Once again, this raises the question of where credit, attribution, and accountability end up being distributed.

This is where chilling effects on cultural production and consumption come into play. Since many artists already struggle to make ends meet and job prospects are rapidly worsening, students are dissuaded from honing their craft, and both new and current artists are more reluctant to share their work to protect themselves from mass scraping. This causes tension, because they often build their audience and visibility by sharing content on social media, crowdfunding platforms, and trade shows, but are now incentivised against doing that to protect themselves from the unethical practices of corporations profiting from their work:

quote:

Artists’ reluctance to share their work and teach others also reduces the ability of prospective artists to learn from experienced ones, limiting the creativity of humans as a whole. Similar to the feedback loop created by next generations of large language models trained on the outputs of previous ones, if we, as humanity, rely solely on AI-generated works to provide us with the media we consume, the words we read, the art we see, we would be heading towards an ouroboros where nothing new is truly created, a stale perpetuation of the past.

What the authors are warning against is a potential feedback loop by which art stops progressing and becomes stale.

AI Art, US copyright law, and Ethics

The paper uses words such as unethical when describing image generators, and this section mostly gives weight to that element. Currently, it isn't exactly clear whether the way image models are trained represents copyright infringement. Class action lawsuits are kicking off, and the scale in play here, in terms of the number of artists involved, is somewhat unprecedented.

What the authors assert here is that these unanswered legal questions about whether copyright applies are used by the companies producing image generators to operate without accountability, so long as they aren't being sued for specific violations of existing copyright law. Since courts take time to work, economic and social harms to artists are allowed to go on.

In terms of authorship, for example, generated images are not copyrightable under US law, although the prompts used might be, if they are independently creative. So iterative work that requires continuous transformations is somewhat hard to pin down copyright-wise. The way artists interact with the tools may end up defining the status, and given the uncertainty here, the authors call for more caution.

One of the major arguments used by the producers of image generators is the concept of fair use:

quote:

Fair use is a doctrine in copyright law that permits the unauthorized or unlicensed use of copyrighted works; whether it is to make copies, to distribute, or to create derivative works. Whether something constitutes fair use is determined on a case-by-case basis and the analysis is structured around four factors.
Of the four factors, two are most relevant:
  • whether the use is commercial and “transformative”; transformative use may be valid for commercial reasons, but not always.
  • whether a use is a threat to the market of the original creator’s work.
So while arguments can often be made that image generators produce transformative work, the fact that they often copy the style of smaller independent artists who can't necessarily afford to fight legal battles about copyright (unlike Getty Images) means that the fair use argument may fall apart, given how image generators often end up threatening the market for the original creator's work. This is without counting moral rights, which protect reputational interests.

The authors call out "data laundering" practices that roughly work as follows:
  • LAION is established as a nonprofit organization
  • LAION releases the LAION-5B dataset, containing 5 billion image-text pairs, many of which contain copyrighted material
  • Hugging Face and Stability AI are declared as sponsors of the above dataset and models
  • LAION claims the dataset is for research purposes, which makes the dataset more likely to be fair use, since nonprofit educational and noncommercial uses are fair
  • Hugging Face and Stability AI use the "fair use" datasets for commercial purposes
  • Stability AI raises $101M in funding with a $1B valuation
  • The accountability for the dataset creation and maintenance, including copyright or privacy issues, is shifted to the nonprofit that collected it
  • Artists get no credit nor compensation
As such, the cycle is generally that by having universities and research labs funded by private corporations, those corporations practically end up bypassing copyright claims for commercial uses. Investors and corporations are free to do whatever they want with limited responsibility. This is more or less a direct call-out from the authors to the ML and AI communities to figure out their ethics and take responsibility to protect people.

Most existing or suggested mechanisms to protect artists (e.g. watermarking) either don't work, or put the responsibility on artists to prove harm before any action is taken. The paper calls for better accountability from the entities who create image generators in the first place, rather than from the artists. The authors advocate for legislation that prevents training models without artists' consent, for funding AI research that isn't tangled with corporate interests, and for evaluating and tasking work based on how it can serve specific communities. This, however, would require shifting ML researchers' point of view to be aware of their relationship to power, rather than assuming their technology is neutral and that usage isn't their responsibility.

The authors conclude:

quote:

Image generators can still be a medium of artistic expression when their training data is not created from artists’ unpaid labor, their proliferation is not meant to supplant humans, and when the speed of content creation is not what is prioritized. [...] If we orient the goal of image generation tools to enhance human creativity rather than attempt to supplant it, we can have works of art [...] that explore its use as a new medium, and not those that appropriate artists’ work without their consent or compensation.

MononcQc fucked around with this message at 04:38 on Oct 8, 2023

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
neural nets are economic machines inasmuch as they're optimizational things first and foremost, the generation or whatever is just poo poo stuck on top of backprop. so to say that they have economic impacts in IP, job whatever, whatever yadda yadda is myopic in nature: they pose economic questions directly.

to the nn toucher, intermediate representations (them bein little shits) and numerics are the two basic serious problems in modelling and backprop solves the former - which is why it hasn't been replaced by anything else over 50 years - and the long struggle has basically been the second (the mass data collection and fuckin with models is the maniacal solution to the second). economists never give one gently caress about either one. coase's theory of the firm talks about coordination costs in a way that you can tell coase doesn't respect or think about intermediate distributed representation for example, and economists talk about dynamic equilibrium as if that wasn't an oxymoron. so i dont think its just questions about intellectual property but property at all.

it is worth noting that the list of steps for laundering the data yadda yadda poo poo is itself an instance of complaining about economic intermediate representations bein poo poo

bob dobbs is dead fucked around with this message at 10:34 on Oct 8, 2023

MononcQc
May 29, 2007

bob dobbs is dead posted:

neural nets are economic machines inasmuch as they're optimizational things first and foremost, the generation or whatever is just poo poo stuck on top of backprop. so to say that they have economic impacts in IP, job whatever, whatever yadda yadda is myopic in nature: they pose economic questions directly.

to the nn toucher, intermediate representations (them bein little shits) and numerics are the two basic serious problems in modelling and backprop solves the former - which is why it hasn't been replaced by anything else over 50 years - and the long struggle has basically been the second (the mass data collection and fuckin with models is the maniacal solution to the second). economists never give one gently caress about either one. coase's theory of the firm talks about coordination costs in a way that you can tell coase doesn't respect or think about intermediate distributed representation for example, and economists talk about dynamic equilibrium as if that wasn't an oxymoron. so i dont think its just questions about intellectual property but property at all.

it is worth noting that the list of steps for laundering the data yadda yadda poo poo is itself an instance of complaining about economic intermediate representations bein poo poo

That's a fair enough point, though I'm not fully sure the connection between IP, property, and direct monetary value is absolute, even if it seems pretty drat solid.

I have to say I sometimes have a bit of a tough time reading you or your stance. Combined with the repo you linked here before, would you generally say you're the type of person to have that sort of high-level model or framework for thinking about things, one that often harks back to some variant of a constraint propagation problem or some literal matrix representation, where everything somehow connects to such a perspective? Is this anywhere close to right?

I mean I have my own fuzzy ways to think about things that are really hard to put into words so I know that's hell of a tricky question to ask, but I've been wondering.

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
theories of value are theories of how prices arise. obviously prices are only meaningful in ensemble (sets of prices are the thing) and they renormalize (if every single thing has a zero added including wages, banknotes, accounts, etc thats a noop), and this and like 7 other reasons are why i think they are a distributed representation. closest strictly speaking field is mechanism design but i dont care about markets, i care about firms because firms are more hosed up

neural net studying is just studying distributed representations, only you can do simulations without your pants at 2am in 20 seconds to 20 hours and are expected to do so, as opposed to economics where experiments take like, 20 years and 50,000 dead, so we just know a lot more about distributed representations and how to get em to optimize in neural land

rumelhart didnt see a material difference between the ordinary csp regime and backprop poo poo - i tend to agree w him, but that sort of csp peep hates the neural peep for academia reasons

bob dobbs is dead fucked around with this message at 20:14 on Oct 12, 2023

VikingofRock
Aug 24, 2008




Newest MononcQc post is a banger: You want my password or a dead patient

This summary is full of some pretty ingenious ways that people in the medical field get around the restrictions of the EHR systems. The article hit pretty close to home for me, because my wife is an optometrist, and she will frequently talk about how janky EHR systems add an hour or two to her work every day. EHR systems should have made doctors' lives significantly easier, but clearly that didn't happen, so it is fascinating to see a detailed breakdown of all the ways that EHR fails medical professionals.

MononcQc
May 29, 2007

VikingofRock posted:

Newest MononcQc post is a banger: You want my password or a dead patient

This summary is full of some pretty ingenious ways that people in the medical field get around the restrictions of the EHR systems. The article hit pretty close to home for me, because my wife is an optometrist, and she will frequently talk about how janky EHR systems add an hour or two to her work every day. EHR systems should have made doctors' lives significantly easier, but clearly that didn't happen, so it is fascinating to see a detailed breakdown of all the ways that EHR fails medical professionals.

Unfortunately, I had posted it in this thread first—I’ve just been slowly converting posts here into blog posts over time because I find myself wanting to share them.

I took a big break from paper reading lately to work on my toy projects, close the yard, write a bit (in French), and read books.

I’ve read an interesting paper this week from Nora Bateson that I was about to annotate, but it’s a bit of a challenging one and it’s more of an essay than a paper; maybe I’ll have the time to cover it next week or something.

Sagebrush
Feb 26, 2012

Can't wait to hear shaggar's opinion on how things like these

quote:

One example given is that one Electronic Health Record (EHR) system forces clinicians to prescribe blood thinners to patients meeting given criteria before they can end their session, even if the patient is already on blood thinners. So clinicians have to do a risky workaround where they order a second dose of blood thinners to log out (a dose which would be lethal if the patient got it), quit the system, then log back in to cancel the second dose.

quote:

There's also a case where a doctor couldn't find the required medication in the software. He found a custom field with free text where he noted the prescription, but the box was not visible on the other end so the prescription was never given and the patient lost half his stomach.

are actually the system working as intended and/or the doctor's fault

MononcQc
May 29, 2007

Sagebrush posted:

Can't wait to hear shaggar's opinion on how things like these

are actually the system working as intended and/or the doctor's fault

aside from admitting EHRs are lovely, he called doctors whiny babies and also blamed administrators:

Shaggar posted:

EMRs are all really loving bad for sure, but then you throw on top of that how lovely hospital administrators are and how doctors are all whiney babies, and theres basically no system they wont gently caress up.

Shaggar posted:

reasonable person: "im gonna solve authentication by giving these doctors prox cards!"
EMR: "we dont support that"
administrator: "we dont want to pay for it"
doctor: "i left my prox card at home, give me yours"

Shaggar
Apr 26, 2006

Sagebrush posted:

Can't wait to hear shaggar's opinion on how things like these



are actually the system working as intended and/or the doctor's fault

The first one is bad design created by a group of doctors and the second is a doctor avoiding using the system because they are lazy and/or doing something wrong and/or ran into another bad design by the aforementioned doctors.

I think what alot of people dont understand about EHRs is that they're hand built/configured based on the needs of doctors and patient care isnt really a consideration.

Specifically with the first record it probably went down something like:
"hey i always forget to enter in the script after i talk to the patient about it. Can you have the system remind me to do it?"
"Sure we can provide a warning for any closed records where the patient is missing a treatment plan for a captured condition"
months pass
"Hey im still forgetting to enter the script because i dont look at the warnings, can you make it force me to enter it in?"
"We can but its going to impact everyone, not just you"
"Thats fine"

WRT the actual problem its either that the implementation was naïve, i.e. "if patient condition needs blood thinners then require new blood thinner prescription. " or the implementation is good and checks for things like an existing blood thinner prescription and the existing prescription is not properly coded and the system cant tell its for blood thinners. This could be: Doctor is using the notes to store prescriptions and nurses know to look there. Doc entered a manual prescription because he couldnt find it in the list so there are no codes. Existing prescription has off-label blood thinning effects either not coded (as the FDA doesnt recognize them) or the implementation checking for contraindications didnt take coded side effects into account. etc.. etc...

The case where you cant find the medicine is usually a bad UI that is compounded by docs not wanting to deal with even the slightest bit of resistance. Like if they do a search for the medicine and misspell it and your search doesnt handle the misspellings they're gonna blame the system, say they cant find it, and stick it in the notes (probably also misspelled).

EHRs are extremely bad for the most part, but thats because they're designed by and for what doctors think they need instead of what they actually need. EPIC's whole thing is that they will build you what ever custom piece of poo poo you want.

Its an entire industry making the most expensive homer cars to order.

MononcQc
May 29, 2007

I received comments on that post on cohost from a bunch of people, some of which were interesting:

quote:

Just about twenty years ago, I worked for a division of a no-longer-extant conglomerate that produced software for hospitals, warehouse and store rooms, in our particular office's case. It remains one of the few jobs where I had to raise my voice regularly, because every decision involved someone saying "well, how can we know how the end users would use this," and me gesturing emphatically out the conference room window at the hospital across the street. I wanted to walk over and ask people for help, but they hated that idea, in favor of working from even-then-outdated advice like the password expiration nonsense mentioned early here...

quote:

I was a nurse at a healthcare company that had a group of clinicians come into the IT department to "give feedback on the software"--something we were all very eager to do. But then we spent the entire time doing basic testing like "when you click on the start button, does the program start?" and I never did figure out why they needed nurses for that.

In retrospect, maybe what really happened was that this was some kind of bizarre compromise between "talk to end users" and "end users don't pay our rent" internal factions. Fine, we'll involve end users, but for god's sake don't ask their opinions.

I'm sure it can manage to be a mix of both: over-indexing on the needs of some users while also never getting feedback or observational data from the broader user base.

Shaggar
Apr 26, 2006
yeah you absolutely cannot ask operations what they want because they will waffle back and forth between asking you to replicate the same bad process they use today and a totally new process that doesn't do what they need. It's a classic Homer car.

but you also can't ignore operations because they're the ones doing the work.

The best solution is getting to the heart of what they're trying to accomplish rather than which features they think will accomplish that task. i.e. you want to ask "What are you trying to bake?", not "Would you like a feature where you can change the frosting flavor?". If they're baking bread and not cakes, that feature doesn't help them, and you're relying on them to understand that frosting doesn't go on bread. You might think they should know this because they're bakers, but they will still demand the feature. This could be because they think you're the expert (you are making the software, after all, and management says we need the software), or because they don't actually know what baking is, or because they're a psycho who will slather frosting all over the bread if you let them. And then you probably have some British guy who refers to cake as sugary bread or something, so he thinks it's totally fine.

That kind of good requirements gathering of course never happens, so your actual best bet is finding 1 or 2 people from the operations group who actually know what they're doing and getting it from them, even if it's a back-channel thing.

Also, telemetry. I loving love telemetry. The number of bugs it's let me fix is wonderful on its own, but you can also see what users are actually doing vs what they say they're doing, and that's incredibly valuable.
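A minimal sketch of what that kind of telemetry looks like, assuming you control the client. The event names and the in-memory sink are invented for illustration; a real setup would ship events somewhere durable. The payoff is being able to count what users actually did, like how often a failed search was followed by a retreat to free-text notes:

```python
# Hypothetical sketch of feature-usage telemetry.
# Event names and the in-memory sink are made up for illustration.
import collections
import time

events = []  # stand-in for whatever sink you actually ship events to

def track(event, **props):
    # Record what the user actually did, with a timestamp for later analysis.
    events.append({"event": event, "ts": time.time(), **props})

# Pretend this is what the app emits over one session:
track("search", query="warfarine", results=0)
track("open_notes")  # user gave up on search and went to free-text notes
track("search", query="warfarin", results=1)

usage = collections.Counter(e["event"] for e in events)
print(usage.most_common())  # searches vs. note fallbacks, straight from real behavior
```

Users will tell you the search works fine; the event counts will tell you they hit zero results and bailed to the notes field.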

Shame Boy
Mar 2, 2010

i like that they tricked those nurses into doing free QA testing, that's a good one

Sagebrush
Feb 26, 2012

Shaggar posted:

The best solution is getting to the heart of what they're trying to accomplish rather than which features they think will accomplish that task. i.e. you want to ask "What are you trying to bake?", not "Would you like a feature where you can change the frosting flavor?".

in industrial design the expression for this is "people want toast, not toasters"

PokeJoe
Aug 24, 2004

hail cgatan


I want a toaster that makes toast from bread

PokeJoe
Aug 24, 2004

hail cgatan


I'm sure someone has tried to sell pre-toasted bread and it was Bad

PokeJoe
Aug 24, 2004

hail cgatan


Foods are special in that if they were radioactive, their half-life would kill everyone

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost

PokeJoe posted:

I'm sure someone has tried to smell pre toasted bread and it was Bad

that's rusks. you can get em for babies. also hardtack, which is still popular in Alaska, Japan and Korea


PokeJoe
Aug 24, 2004

hail cgatan


that's not the same thing. toast is actually good
