|
ADINSX posted:This is a good way of explaining it, and I think another thing to point out that actually makes it intuitive to me is this: if you always change your answer after the door reveal, the only way you could lose is if you picked correctly on the first guess. The odds of that happening are 1/3, so your odds of winning with this approach are 2/3. Reminds me of the very counterintuitive thing where you have a group of people and ask how many you need on average before the same birthday crops up twice, and it's far less than you'd imagine
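The switching argument above is easy to check with a quick simulation (a minimal sketch; the function name and trial count are arbitrary, not from the post):

```python
import random

def monty_hall_switch_rate(trials=200_000):
    """Fraction of games won by a player who always switches doors."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's first guess
        # The host always opens a goat door, so switching wins exactly
        # when the first pick was wrong, which happens 2/3 of the time.
        if pick != car:
            wins += 1
    return wins / trials
```

Running it gives a rate close to 0.667, matching the 2/3 figure in the post.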
|
# ? Aug 21, 2023 21:23 |
|
birthday paradox is always straightforward if you know landau notation. possible birthday matches grow as O(n^2), num peeps grows O(n)
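The O(n^2) intuition is that n people form n(n-1)/2 possible pairs, while n itself only grows linearly. The exact collision probability is a one-liner to compute (a sketch; the function name is made up for illustration):

```python
def p_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for k in range(n):
        # k-th person must avoid the k birthdays already taken
        p_all_distinct *= (days - k) / days
    return 1.0 - p_all_distinct
```

p_shared_birthday(23) comes out to about 0.507, so a mere 23 people already give better-than-even odds of a match.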
|
# ? Aug 21, 2023 21:48 |
|
bob dobbs is dead posted:birthday paradox is always straightforward if you know landau notation. possible birthday matches grow as O(n^2), num peeps grows O(n) load bearing if
|
# ? Aug 21, 2023 22:39 |
|
a true iff, with that second f bein a thing
|
# ? Aug 21, 2023 22:45 |
|
bob dobbs is dead posted:birthday paradox is always straightforward if you know landau notation. possible birthday matches grow as O(n^2), num peeps grows O(n) you don't even have to know the notation to understand it.
|
# ? Aug 23, 2023 06:51 |
|
someone wanted to see mertonon in this thread iirc. i cut a pre-alpha release https://github.com/howonlee/mertonon
|
# ? Aug 24, 2023 17:52 |
|
pro-click but mostly just for the readme
|
# ? Aug 24, 2023 18:54 |
|
if you liked the readme you'll like the q and a sections starting with "is this gonna lead to a grey dystopian hellhole" and "is this gonna automate me out of a job" also please post it in postingy places if you're feelin it
|
# ? Aug 24, 2023 18:57 |
|
Shame Boy posted:pro-click but mostly just for the readme
|
# ? Aug 25, 2023 13:09 |
|
bob dobbs is dead posted:someone wanted to see mertonon in this thread iirc. i cut a pre-alpha release

this is really cool! It's really giving me vibes of what The People's Republic of Walmart was describing, though that could just be because that book may have used one of the same quotes about Martians as you did. Not sure.

My only specific advice is related to docs/usage.md. Your scenario is a little too abstracted: specifically, when you're walking through the creation of nodes, you're using examples so abstracted from any domain that it's tough as a non-expert to grok how I would use this for my own problems. You stated earlier that your goal was to provide a system that doesn't require operations research specialists to run, so for your usage readme it would be far better to choose an internally consistent domain application of nodes rather than broad concepts like "cost centres". As a non-expert, I'll have a far easier time accepting the analogical limitations of translating across domains to grok the idea and features of the app without needing to wade through abstraction.

I get that you might not want to dissuade users by giving too domain-specific an example, but because this is already such an abstract endeavor, it might be worth replacing the "Mimblzopper Cost Centres" with, like, a logistics company or Uber Eats ice cream delivery or whatever you have the most domain knowledge of. Though you maybe risk dissuading people not in that domain, it's far easier for people who see something really interesting in this product to translate their use-case if it's from a known alternative domain.
Like, if I'm interested in using this for, say, allocating software engineers in a B2B org, I'd still have a far easier time reading an example of a "Mimblezop, Rigman & Grugdek Partners" law firm allocating lawyers to cases than a completely domain-agnostic abstraction detailing responsibility centers and cost centers. Even though the domains are completely different, at least I could grok how the usage would work by making approximate comparisons between examples, even if I have to make big leaps and bounds in my analogy mapping. Whereas with the abstracted "Mimblezop Cost Centres", counterintuitively, I find it harder to go from abstract -> concrete than from concrete -> concrete.

If you whipped up a similar walkthrough for something domain-specific, you could throw that at the top of your readme, because it would serve as a far better intro / overview of your app than diving straight into your inspiration and design methodology. Same advice, I think, for the leading examples and use cases in your intro: they talk about "doohickies" and such, and would be far better rewritten into a few different concrete domain-specific examples. You can later explain, if you want, how "Ice Cream Sales" was an example of a responsibility center and "Ice Cream ITSEC" was an example of a cost center. People are generally willing to accept that arbitrary, artificial examples are very simplified and just scratching the surface of potential usage.

Oysters Autobio fucked around with this message at 16:29 on Sep 3, 2023 |
# ? Sep 3, 2023 16:11 |
|
someone put up an issue saying shits too abstracted yeah, i dunno if that was you. im writing a less abstracted one. the only domain where i can up and call ... 30ish.... c-levels and leadership and have them kinda listen to me maybe is software saas land, so i guess that'll have to be it
|
# ? Sep 3, 2023 18:21 |
|
This week I decided to read an old classic essay from the 70s, one that has since gathered hundreds of citations; it's the kind of text whose ideas are so ingrained into modern theory about organizations and political groups that I believe I knew many of them without having read the original. It's therefore a great time to go read the source material: Jo Freeman's The Tyranny of Structurelessness.

The text was written as a comment on how various feminist groups early on would take a "leaderless" or "structureless" approach, often as a counter to established political and patriarchal groups: the looseness encouraged discussion and participation, but the author asserts that often, little more than insights would come out of these groups. In this essay, she covers what exactly the problems around structureless groups tend to be—which I think is what people quote a lot when referring to the tyranny of structurelessness as a catch-all phrase—and then ways in which groups could be structured more effectively for democratic purposes without necessarily replicating existing structures. This latter part, a much shorter one, seems to be remembered a bit less.

But let's start with the problems. The big obvious one is that there is no such thing as an actually structureless organization: any group of people that comes together for any purpose eventually has some structure emerge. Any interaction, conversation, skill, distribution of tasks, or variation in power will end up creating an implicit, possibly flexible structure that may change over time. The issue, then, is that a group aiming to be structureless:

quote:... does not prevent the formation of informal structures, only formal ones. [...] Similarly "laissez faire" philosophy did not prevent the economically powerful from establishing control over wages, prices, and distribution of goods; it only prevented the government from doing so. 
Thus structurelessness becomes a way of masking power, and within the women's movement is usually most strongly advocated by those who are the most powerful (whether they are conscious of their power or not). As long as the structure of the group is informal, the rules of how decisions are made are known only to a few and awareness of power is limited to those who know the rules.

Basically, the assertion here is that as you eliminate the explicit control structure, an implicit, hidden one still exists and can still exert itself, except that it is not bound by clear rules, and there are very few ways to know it is even there. An implicit structure may therefore in practice limit open participation rather than encourage it. In particular, while formal structures won't destroy the informal ones, they will prevent them from becoming dominant, and will provide tools to "attack" them if they aren't acting in the interests of the group at large. The author repeats, however, that most organizations with an explicit structure still contain many implicit structures within themselves.

Jo Freeman takes this opportunity to define what elites are about. Specifically, she points out that elites are not individuals; they are instead small groups wielding power over larger groups they have no responsibility toward, often without those larger groups' consent or knowledge. They are not conspiracies, and she describes them as "groups of friends who also happen to participate in the same political activities." Many such groups may exist within a larger one, and so organizations can have multiple elites jockeying for power at the same time.

She mentions that elites are often formed based on background, time, or personality more than on actual talent, competence, or dedication to the cause. The former is how you make friends, whereas the latter is what an organization needs. Once the pattern is established, looking for people who "fit in" when recruiting tends to sustain it. 
If you're outside the elite, the only way in is to find a "sponsor" and become their friend, until they bring you into a sort of inner circle. By comparison, having explicit decision-making processes (requiring some structure) will make it easier for someone outside the elite(s) to participate. For what it's worth, she mentions that elites aren't inherently bad; they're just inevitable. They can be useful and do very useful things as well.

In structured organizations, these elites are less likely to govern than they are in unstructured groups. In particular, since they haven't been put in power by anyone in structureless groups, there's also no one who can take their power away. They may try to act responsibly to keep their influence, but whether they do is up to their own will and interests: the group can't demand it of them.

The author also looks at the concept of "stars", volunteers or people who become very popular with the public or the media. She states that this is a sort of natural outcome, because the public expects a spokesperson to represent a group:

quote:But because there are no official spokespeople nor any decision-making body that the press can query when it wants to know the movement's position on a subject, these women are perceived as the spokespeople. Thus, whether they want to or not, whether the movement likes it or not, women of public note are put in the role of spokespeople by default.

Other problems happen when people get tired of "just talking" and want to turn to political action; many structureless groups are incapable of it. Those that can function often have some very specific properties:
Specifically for this latter case, the older, established elite has few ways of discussing the challenge in the open without exposing its own covert structure. Anti-elitism and calls for structurelessness are often the best way for it to go, while also trying to exclude its opponents, possibly by re-defining the existing purpose of the group to align with the existing elite, or by re-defining the opponents as bad actors (for example, by Red-baiting). This basically means institutionalizing the elite's power structure, which isn't always possible. She adds that the less structured a group is, the less control it has over the directions in which it develops and the projects it engages in:

quote:If the movement continues deliberately to not select who shall exercise power, it does not thereby abolish power. All it does is abdicate the right to demand that those who do exercise power and influence be responsible for it.

I should point out that throughout the text, she mentions that none of these criticisms by any means imply that structured organizations are immune to these problems. They may, however, have defined means of dealing with them.

Jo Freeman then gets into the concepts required to properly structure power without necessarily replicating existing (often problematic) structures. She mentions a need for continuous experimentation and re-structuring, with various approaches possibly needed for various situations. The principles are:
On these principles, she concludes:

quote:When these principles are applied, they insure that whatever structures are developed by different movement groups will be controlled by and responsible to the group. The group of people in positions of authority will be diffuse, flexible, open, and temporary. They will not be in such an easy position to institutionalize their power because ultimate decisions will be made by the group at large. The group will have the power to determine who shall exercise authority within it.

To me, a lot of the comments ring true, and it's obvious that many of the criticisms applied to feminist groups in past decades apply pretty directly to corporate environments as well. I'm not surprised to see a lot of the discourse borrowed elsewhere (nor am I surprised to see some anarchists who dislike the text; others, including communalists, seem to take it as evidence of a need for "federation"). I do appreciate the last bits on the theory of how to better apply power, and the call to experiment with structure more actively rather than trying to throw it away altogether.

All in all, and if I can be personal here, I'm a bit relieved, because it seems like the paper at least did not preemptively discredit my whole talk on feedback loops and complexity, which has a whole bit on "nominal" vs. "emergent" organizational structures and the need to align actions on both levels when trying to enact change. It would have felt a bit embarrassing to give the whole speech in front of crowds only to find out it had been proven wrong decades before I was born (which, of course, doesn't mean I'm right either).
|
# ? Sep 4, 2023 05:00 |
|
this is social science, nobody has sigfigs anyways. best to only talk of proof when we gots some proofs
|
# ? Sep 4, 2023 05:28 |
|
I would really love to hear a good post-Cold War analysis of Soviet bureaucracy and all the negative aspects that can be attributed to that experiment with structured and centralized control. Like, are phenomena like nomenklatura systems (i.e. informal patronage networks superseding democratic mechanisms like electoralism, or meritocratic mechanisms like HR competition processes) simply inevitable and basically a "cost of doing business", or were there specific aspects of Soviet structures that led to the rise of bureaucratic power? I know this is a tricky topic to approach "apolitically", but often those criticising any centralization point to these well documented experiences as simply the sort of mechanisms that are inevitable in an overly bureaucratic system. Are there alternative centralized structures that aren't "bureaucratic" in nature?
|
# ? Sep 4, 2023 16:27 |
|
large systems of variables constrained by constraints at thermodynamic limit encounter nonequilibrium thermodynamics crap, eg. you can't loving throw a rock in em without hittin a power law. cf. montanari-mezard book (https://web.stanford.edu/~montanar/RESEARCH/book.html). this would indicate that this shits universal in the strict physical sense, a conjecture that led pareto to literally become a fascist. however, in actual practical computation there is the pretty quotidian countermeasure of random restart (cf. https://www.cs.cornell.edu/gomes/pdf/2000_gomes_jar_phenomena.pdf)

since mononcqc dives into paperland and fishes up a santa fe institute paper like a third of the time, it would be remiss of me not to mention that this is an important main dealio of the sfi

so "large systems of variables constrained by constraints" is an inescapable fact of economics, so nomenklaturas and fuerdai / taizidang appearing is, like, an inevitable pollution of such systems, but with great and working countermeasures in the hard computational satisfiability domain. I tend to believe this is why revolutions tend to have a dodo bird race quality to them

inspection in serious detail reveals a lotta poo poo that would need to be proved for this conjecture to be correct, which is why i'm provin them in the weekends instead of proffering this as a coherent theory of value inequality and class quite yet. it'll get there. ultimately there will also have to be a pretty material empirical component

bob dobbs is dead fucked around with this message at 16:55 on Sep 4, 2023 |
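The random-restart countermeasure mentioned above is easy to sketch: when a randomized solver's runtime distribution is heavy-tailed, killing any run that exceeds a cutoff and restarting bounds the damage from the tail. A toy sketch (the Pareto runtime model and cutoff value are illustrative assumptions, not taken from the Gomes et al. paper):

```python
import random

def toy_solver_runtime(rng):
    # stand-in for a randomized solver whose runtime is heavy-tailed;
    # paretovariate(1.1) has a finite mean but a very fat tail
    return rng.paretovariate(1.1)

def solve_with_restarts(rng, cutoff):
    """Restart any run that exceeds the cutoff; return total work spent."""
    total = 0.0
    while True:
        t = toy_solver_runtime(rng)
        if t <= cutoff:
            return total + t
        total += cutoff  # abandon the run, pay only the cutoff, try again
```

Without restarts, a single unlucky run can cost orders of magnitude more than the median; with restarts, the total cost stays near the cutoff times a small expected number of retries.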
# ? Sep 4, 2023 16:43 |
|
bob dobbs is dead posted:since mononcqc dives into paperland and fishes up a santa fe institute paper like a third of the time, it would be remiss of me not to mention that this is an important main dealio of the sfi I've covered 56 papers to date by my tagging (god drat) and I think there's been 1 clearly identified sfi paper (tacit transmission of knowledge) and possibly a few from other authors slipped by, but I think 1/3 would be quite surprising.
|
# ? Sep 4, 2023 17:20 |
|
god drat, i just remembered that one clearly
|
# ? Sep 4, 2023 17:21 |
|
quick maths (structureless, modulo peeps)
|
# ? Sep 4, 2023 17:25 |
|
Oysters Autobio posted:I would really love to hear a good post-Cold War analysis of Soviet bureaucracy and all the negative aspects that can be attributed to that experiment with structured and centralized control.

I have no answer to this, but the sort of recurring theme I see about centrally managed systems is that:
If you want centralized structures that aren't bureaucratic, I think they'd be expected to have severe size or complexity limits, or to operate in particularly stable environments. That being said, I haven't seen strict literature saying "here are central management systems that work great and scale up more than we thought"—it feels like we're more or less on a continued timeline of having believed that to be possible, with social scientists and others constantly finding out how that idea was inaccurate.

bob dobbs is dead posted:god drat, i just remembered that one clearly

A coworker has thrown this one my way: https://www.sciencedirect.com/science/article/pii/S1090513823000557 from sfi. I want to read it at some point, though I'm always a bit doubtful about papers that seem to be gunning for the formula of life. There are cool insights in them about what sort of behaviours may happen, but sometimes it's hard for me to know how serious they are about the applicability, as opposed to just developing a nifty idea for its own sake.

MononcQc fucked around with this message at 20:12 on Sep 4, 2023 |
# ? Sep 4, 2023 20:10 |
|
i got the martians quote from jeff shrager, who was herb simon's last grad student and who wrote me one of my rec letters for grad school, btw
|
# ? Sep 4, 2023 23:04 |
|
bob dobbs is dead posted:i got the martians quote from jeff shrager, who was herb simons last grad student and who wrote me one of my rec letters for grad school, btw shrager was right?
|
# ? Sep 4, 2023 23:05 |
|
This week I read David Woods' The Strategic Agility Gap: How Organizations Are Slow and Stale to Adapt in Turbulent Worlds, an open access chapter that surveys and puts together a lot of the concepts he has written about in the past, particularly around the need for organizations to balance growth in capabilities with the ability to adjust to the changes they enable.

The idea here is that growth in capability—often due to better technology—brings rapid changes at a societal level: new opportunities are found, complexity grows, and new threats emerge. New capabilities generally mean growth, expansion, bigger scales, and more interactions, which means more surprises. On the other hand, organizations are generally slow and stale when it comes to adapting to these threats or seizing these opportunities:

quote:As capability grows to improve performance on some criteria, interdependencies become more extensive and produce surprising anomalies as the systems also become more brittle.

This figure is attached:

Because the risks are difficult to see ahead and the growth is continuous, there's a risk of disturbances and challenges cascading; this requires anticipating challenges and building a "readiness-to-response" to avoid having to generate and deploy responses while the challenge is taking place. Here the text seems to intend something different from just having a plan for specific challenges; the words used are "organizations need to coordinate and synchronize activities over changing tempos, otherwise decisions will be slow and stale". This hints at overall response patterns and reorganization more than at having a runbook with specific scenarios.

To provide examples of a failing and a successful case, Woods covers the Knight Capital collapse from 2012 (other great link) and a transport company dealing with Hurricane Sandy (illegal source). 
In the case of Knight Capital, they rolled out code that reused an old feature flag that had been repurposed, and the deployment failed on a single one of eight servers. When it went live, it produced unexpected behavior that ran more transactions than expected; rolling it back produced even more anomalous behavior because of the flag. The people involved struggled to understand the issue. Woods mentions that it took a while before upper management was informed and then authorized to stop trading. By then, less than an hour had elapsed, but it was too late and the company went bankrupt from its now untenable market position. The author picked it as an example that shows that:
The comparative case of a large transportation firm that reconfigured itself during Hurricane Sandy names the following elements behind its effective adaptation. Quoted literally from the text, they:
This, Woods mentions, helped balance what is called the efficiency-thoroughness trade-off. Also noted ETTO, this is a principle stating that needs for safety tend to reduce efficiency, and demands for productivity tend to reduce thoroughness, all because people are limited on time and these two values are in tension. Specifically, the firm sacrificed economics and standard processes to keep up with events, using patterns that already existed within the organization, given that adapting to surprises was a normal experience for it.

In comparing both cases, the author mentions that following the plan is not enough in these situations. There's a need for anticipation and initiative, particularly when events challenge existing plans. The difference between the two organizations is that for the transportation company:

quote:From facing surprises in the past, the varying roles/levels had opportunities to exercise their coordinative ‘muscles,’ even though this specific event presented unique difficulties. In the strategic agility gap, the challenge for organizations is to develop new forms of coordination across functional, spatial, and temporal scales—otherwise organizations will be slow, stale and fragmented as they inevitably confront surprising challenges.

While I personally feel the time scales between the cases are very different for the comparison, they probably do a decent job of demonstrating the types of behaviors on each side of the accelerated trajectory line.

The paper then shifts to a "Systems are messy" section, recalling the old WWII term SNAFU, standing for "Situation Normal: All hosed Up". Standard plans inevitably break down, and some people in some roles do "SNAFU catching", often in hard-to-detect manners:

quote:all organizations are adaptive systems, consist of a network of adaptive systems, and exist in a web of adaptive systems—i.e., the resilience engineering paradigm. 
All human adaptive systems make trade-offs to cope with finite resources, and all live in a changing world. The pace of change is accelerated by past successes, as growth stimulates more adaptation by more players in a more interconnected system. The point here is that operating within the strategic agility gap is unavoidable. Organizations love to rationalize this away:
Woods states directly that these rationalizations are wrong empirically, technically, and theoretically. When surprises are framed as deviations from the established plan, the compliance pressure that follows undermines the system's adaptive capacities. The background of improvements followed by a sudden collapse surprises and confuses people within the system. The argument here is that this is normal: as scale and interdependencies increase, performance increases, but so does the proportion of large collapses and failures. The Resilience Engineering statement here is that we shouldn't be surprised by the failures, but by how few of them we have.

One of Woods' favorite laws is the fluency law, which states:

quote:well adapted activity occurs with a facility that belies the difficulty of the demands resolved and the dilemmas balanced.

The reason we see so few failures is that adapting to SNAFUs continually takes place, and that it is nearly invisible. It is, in fact, one of the tenets of resilience engineering. Past successes in these situations drive effective leaders to take advantage of improvements and push their systems to do even more, and this creates adaptive cycles which accelerate the strategic gap. Organizations end up living in that strategic agility gap, and to thrive there they need to develop and sustain the ability to continuously adapt.

Resilience Engineering researchers turn to web operations to study this: outages and near-misses are incredibly common even in the best organizations, and things change so fast that they provide a great laboratory for studying constraints and shifting opportunities and risks. The key ingredients identified are:
To express and apply initiative, there's a need to push it down closer to the action; this can be miscalibrated in a way that fragments efforts and makes units work at cross-purposes. Since we can't just enforce plans harder, resilience engineering seeks system architectures that can adjust the expression of initiative as the potential for surprises varies. This requires prioritizing and sacrificing some goals as conflicts arise. Proactive learning is key there—and not just learning from events that have already caused economic loss or harm.

There's also a good call for reciprocity, which I'll use as the author's closing words:

quote:Effective organizations living in the gap build reciprocity across roles and levels. Reciprocity in collaborative work is commitment to mutual assistance. With reciprocity, one unit donates from their limited resources now to help another in their role, so both achieve benefits for overarching goals, and trusts that when the roles are reversed, the other unit will come to its aid.
|
# ? Sep 24, 2023 02:59 |
|
This week, I wanted to cover a collective paper written by a group of researchers and artists collaborating together, titled AI Art and its Impact on Artists.

The paper starts with an overview of how image generation works at a high level: Convolutional Neural Networks (CNNs) doing image recognition; variational autoencoders (VAEs), which use mirrored neural networks and enabled the first generative models (such as VQ-VAE-2); then generative adversarial networks (GANs), where a generator network tries to fool a discriminator (another network evaluating how realistic an image is). This latter type of tech eventually got augmented with the ability to consider tags describing the data, and was used for images as large as 512x512. Natural Language Processing (NLP) allowed increasing the complexity of texts and generated images, and the inclusion of Large Language Models (LLMs) led to natural language prompts. Eventually (in the last few years), diffusion models inspired by fluid dynamics—they apply noise to an image and then de-noise the results—led to models not constrained by natural language understanding. This lands us close to where we are with Stable Diffusion, DALL-E, Midjourney, and others.

Models of these types are trained on large image datasets such as JFT-300M or LAION (which has sub-variants), which contain hundreds of millions to billions of image-text pairs. In total, the paper lists roughly 20 commercial products using various datasets.

The authors point out that there's a tendency to anthropomorphize image generators, to talk about them like they're artists, even going as far as saying they are "inspired" by the data in their training set. 
The authors disagree, and present some philosophy of art to support their point, defining art as a uniquely human endeavor connected specifically to human culture and experience:

quote:[W]hile non-human entities can have aesthetic experiences and express affect, a work of art is a cultural product that uses the resources of a culture to embody that experience in a form that all who stand before it can see. [...] Further, this process must be controlled by a sensitivity to the attitude of the perceiver insofar as the product is intended to be enjoyed by an audience. [...] This control over the process of production is what marks the unique contribution of humanity: while art is grounded in the very activities of living, it is the human recognition of cause and effect that transforms activities once performed under organic pressures into activities done for the sake of eliciting some response from a viewer.

As an example, they mention a robin singing or a peacock dancing under organic pressures, whereas human song and dance serve purposes beyond organic ones, including cultural ones and communication. Image generators, however, do not have that understanding of the perspective of the audience, and do not undergo a similar artistic process. Instead, they imitate whichever parts of the process are embodied in the works within the training set—works from image generators may be aesthetic, but not artistic. True artistic works generally must also be aesthetic, but aesthetics alone is mostly a matter of technique, which isn't sufficient to be truly artistic. This plays out in how image generators can give good results, but to do so they require extensive training to be shown what the "right" output should be, whereas humans do not require such criteria. 
This makes image generators great at copying style, but, the authors say, it is very rare for artists to be able to copy each other's styles:

quote:The very few artists who are able to do this copying are known for this skill. An artists’ ‘personal style’ is like their handwriting, authentic to them, and they develop this style (their personal voice and unique visual language) over years and through their lived experiences.

In short, the development of an artist's style comes from repeated interactions with their environment and culture, and there's a cycle of influence and impact shaping it. It is unique to each of them and does not come in isolation, but from active participation and growth, in a way that is constantly evolving. By comparison, image generators, once trained, stop changing until they are explicitly trained again, either from scratch or by fine-tuning. The abstract interpretations and sentimental imagery are missing, the paper argues.

quote:image generators are not artists: they require human aims and purposes to direct their “production” or “reproduction,” and it is these human aims and purposes that shape the directions to which their outputs are produced. However, many people describe image generators as if these artifacts themselves are artists, which devalues artists’ works, robs them of credit and compensation, and ascribes accountability to the image generators rather than holding the entities that create them accountable.

This is why we need to be really careful about the words we choose to describe image generators. Anthropomorphisation shifts accountability and credit in a distinct way between the automation, the stakeholders who produce and train it, and the artists whose output is used to train it.

Impact on Artists

The paper at this point shifts to covering the impact of AI art on artists, under many lenses:
For economic loss, the argument is that an artist's style is formed over years of honing their craft through practice, observation, schooling, and the costs of materials (books, supplies, tutorials). Their output is then used without compensation by companies like Stability AI, companies backed by billions from venture capitalists, who then compete with them directly in the market. Folks like Sam Altman of OpenAI specifically call out the expectation of replacing creatives' jobs; Stability AI CEO Emad Mostaque has accused artists of wanting to have a “monopoly on visual communications” and “skill segregation”. The paper retorts:

quote:To the contrary, current image generation business models like those of Midjourney, Open AI and Stability AI, stand to centralize power in the hands of a few corporations located in Western nations, while disenfranchising artists around the world.

The behavior observed is that image generators can output content much faster and cheaper, but without nearly as much depth of expression as a human. They allow flooding the market with "acceptable" imagery that will supplant demand for artists. The paper then covers multiple examples of this already happening in the TV, movie, and gaming industries. While this hurts fully employed artists, they point out that self-employed artists are also likely to suffer. They point to the example of the Clarkesworld science fiction magazine, which got flooded with so much AI-generated sci-fi that it had to stop accepting all submissions, eventually re-opening them while only accepting submissions from previously published authors. The net impact, they say, is that rather than democratizing art, the number of artists who can share their work and receive recognition is reduced. Many artists already have to use image generators in order to keep their jobs, and report having their role slowly shift to "clean up work, with no agency for creative decisions". 
Basically, if they want to keep working, they have to make the output of image generators good enough, which reinforces a pattern that de-skills their work. Actual artwork allowing full creative control is increasingly likely to only be affordable to people who are already independently wealthy, and to stall the development of artists from other backgrounds.

In terms of digital artwork forgery, the lack of consent and attribution is also problematic. Copyrighted images and photographs are used to train image generators, which often produce near-exact replicas. While artists have increasing trouble making a living from their art, some companies directly market their ability to replicate their style. Often, the people who asked for the images to be generated associate the artist's name with them (because it's their style), and the artist's reputation slowly gets tied to images they wouldn't have agreed to produce. In some cases, the images are used in harsher situations such as harassment, hate speech, or genocide denial. This existed before image generators, but happens faster now. Artist Sarah Andersen states:

quote:"Through the bombardment of my social media with these images, the alt-right created a shadow version of me, a version that advocated neo-Nazi ideology... I received outraged messages and had to contact my publisher to make my stance against this ultraclear.”

She underscores how this issue is exacerbated by the advent of image generators, writing "The notion that someone could type my name into a generator and produce an image in my style immediately disturbed me... 
I felt violated.”

Turning to hegemonic views and stereotyping, the authors report that underrepresented groups, those more used to being invisible, can attest to seeing a distortion of themselves in the output of image generators, often warping reality based on stereotypes:

quote:For instance, [Senegalese artist Linda Dounia Rebeiz] notes that the images generated by Dall-E 2 pertaining to her hometown Dakar were wildly inaccurate, depicting ruins and desert instead of a growing coastal city.

The objectification of some cultures goes further, where "synthetic models" are generated and licensed to organizations, and the benefits go to the people who generate the images rather than to the people from the cultures on which they are based. Once again, this brings back the question of where credit, attribution, and accountability end up being distributed.

This is where chilling effects on cultural production and consumption come into play. Since many artists already struggle to make ends meet and job prospects are rapidly worsening, students are dissuaded from honing their crafts, and both new and current artists are more reluctant to share their work to protect themselves from mass scraping. This causes tension, because they often build their audience and visibility by sharing content on social media, crowdfunding platforms, and trade shows, but are now incentivised against doing so to protect themselves from the unethical practices of corporations profiting from their work:

quote:Artists’ reluctance to share their work and teach others also reduces the ability of prospective artists to learn from experienced ones, limiting the creativity of humans as a whole. 
quote:Similar to the feedback loop created by next generations of large language models trained on the outputs of previous ones, if we, as humanity, rely solely on AI-generated works to provide us with the media we consume, the words we read, the art we see, we would be heading towards an ouroboros where nothing new is truly created, a stale perpetuation of the past.

What the authors are warning against is a potential feedback loop by which art stops progressing and becomes stale.

AI Art, US copyright law, and Ethics

The paper uses words such as unethical when describing image generators, and this section mostly gives weight to that element. Currently, it isn't exactly clear whether the way image models are trained represents copyright infringement. Class action lawsuits are kicking off, and the scale in play here, in terms of the number of artists involved, is somewhat unprecedented. What the authors assert is that these unanswered legal questions about whether copyright applies are used by the companies producing image generators to operate without accountability, so long as they aren't being sued for specific violations of existing copyright law. Since courts take time to work, economic and social harms to artists are allowed to go on.

In terms of authorship, for example, the generated images are not copyrightable under US law, although the prompts used might be copyrightable if they are independently creative. So iterative work that requires continuous transformations is somewhat hard to pin down copyright-wise. The way artists interact with the tools may end up defining their status, and given the uncertainty here, the authors call for more caution. One of the major arguments used by the producers of image generators is the concept of fair use:

quote:Fair use is a doctrine in copyright law that permits the unauthorized or unlicensed use of copyrighted works; whether it is to make copies, to distribute, or to create derivative works. 
Whether something constitutes fair use is determined on a case-by-case basis and the analysis is structured around four factors.
The authors call out "data laundering" practices that roughly work as follows: the data collection and model training get done under a research-oriented entity (academic or non-profit, such as LAION, which Stability AI helps fund), which benefits from fair use protections for research purposes; the resulting datasets and models then get commercialized by the companies backing those entities, who inherit the benefits while distancing themselves from how the data was collected.
Most existing or suggested mechanisms to protect artists (e.g. watermarking) either don't work, or put the responsibility on artists to prove harm before any action is taken. The paper calls for accountability to rest with the entities who create image generators in the first place, rather than with the artists. They advocate for legislation that prevents training models without artists' consent, for funding AI research that isn't tangled with corporate interests, and for evaluating and tasking work based on how it can serve specific communities. This, however, would require shifting ML researchers' point of view to be aware of their relationship to power, rather than assuming their technology is neutral and that its usage isn't their responsibility. The authors conclude:

quote:Image generators can still be a medium of artistic expression when their training data is not created from artists’ unpaid labor, their proliferation is not meant to supplant humans, and when the speed of content creation is not what is prioritized. [...] If we orient the goal of image generation tools to enhance human creativity rather than attempt to supplant it, we can have works of art [...] that explore its use as a new medium, and not those that appropriate artists’ work without their consent or compensation.

MononcQc fucked around with this message at 04:38 on Oct 8, 2023 |
# ? Oct 8, 2023 04:31 |
|
neural nets are economic machines inasmuch as they're optimizational things first and foremost, the generation or whatever is just poo poo stuck on top of backprop. so to say that they have economic impacts in IP, job whatever, whatever yadda yadda is myopic in nature: they pose economic questions directly. to the nn toucher, intermediate representations (them bein little shits) and numerics are the two basic serious problems in modelling and backprop solves the former - which is why it hasn't been replaced by anything else over 50 years - and the long struggle has basically been the second (the mass data collection and fuckin with models is the maniacal solution to the second). economists never give one gently caress about either one. coase's theory of the firm talks about coordination costs in a way that you can tell coase doesn't respect or think about intermediate distributed representation for example, and economists talk about dynamic equilibrium as if that wasn't an oxymoron. so i dont think its just questions about intellectual property but property at all. it is worth noting that the list of steps for laundering the data yadda yadda poo poo is itself an instance of complaining about economic intermediate representations bein poo poo bob dobbs is dead fucked around with this message at 10:34 on Oct 8, 2023 |
# ? Oct 8, 2023 10:31 |
|
bob dobbs is dead posted:neural nets are economic machines inasmuch as they're optimizational things first and foremost, the generation or whatever is just poo poo stuck on top of backprop. so to say that they have economic impacts in IP, job whatever, whatever yadda yadda is myopic in nature: they pose economic questions directly.

That's a fair enough point, though I'm not fully sure the connection between IP, property, and direct monetary value is absolute, even if it seems pretty drat solid. I have to say I sometimes have a bit of a tough time reading you or your stance. Combined with the repo you linked here before, would you generally say you're the type of person to have that sort of high-level model or framework for thinking about things that often harks back to some variant of a constraint propagation problem or some literal matrix representation, where everything somehow connects to such a perspective? Is this anywhere close to right? I mean I have my own fuzzy ways of thinking about things that are really hard to put into words, so I know that's a hell of a tricky question to ask, but I've been wondering.
|
# ? Oct 12, 2023 01:58 |
|
theories of value are theories of how prices arise. obviously prices are only meaningful in ensemble (sets of prices are the thing) and they renormalize (if every single thing has a zero added including wages, banknotes, accounts, etc thats a noop), and this and like 7 other reasons are why i think they are a distributed representation. closest strictly speaking field is mechanism design but i dont care about markets, i care about firms because firms are more hosed up neural net studying is just studying distributed representations, only you can do simulations without your pants at 2am in 20 seconds to 20 hours and are expected to do so, as opposed to economics where experiments take like, 20 years and 50,000 dead, so we just know a lot more about distributed representations and how to get em to optimize in neural land rumelhart didnt see a material difference between the ordinary csp regime and backprop poo poo - i tend to agree w him, but that sort of csp peep hates the neural peep for academia reasons bob dobbs is dead fucked around with this message at 20:14 on Oct 12, 2023 |
# ? Oct 12, 2023 19:51 |
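The "20 seconds to 20 hours" simulation point above is easy to make concrete. Here's a minimal backprop sketch in numpy: a tiny 2-4-1 sigmoid net learning XOR, where the hidden layer is the intermediate distributed representation being shaped. This is toy illustration only, not anything from the thread; the layer sizes, learning rate, and iteration count are all arbitrary:

```python
# toy backprop: a 2-4-1 sigmoid net learning XOR in a few seconds.
# the hidden activations `h` are the intermediate distributed
# representation that backprop shapes; hyperparameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))  # input -> hidden
W2 = rng.normal(size=(4, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1)        # the distributed representation
    out = sigmoid(h @ W2)
    losses.append(float(np.mean((out - y) ** 2)))
    # backward pass: nothing but the chain rule
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

print(f"loss went {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The whole experiment runs in well under a minute on a laptop, which is the contrast being drawn with economics, where the equivalent "experiment" is decades long.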
Newest MononcQc post is a banger: You want my password or a dead patient This summary is full of some pretty ingenious ways that people in the medical field get around the restrictions of the EHR systems. The article hit pretty close to home for me, because my wife is an optometrist, and she will frequently talk about how janky EHR systems add an hour or two to her work every day. EHR systems should have made doctors' lives significantly easier, but clearly that didn't happen, so it is fascinating to see a detailed breakdown of all the ways that EHR fails medical professionals.
|
|
# ? Nov 24, 2023 22:32 |
|
VikingofRock posted:Newest MononcQc post is a banger: You want my password or a dead patient

Unfortunately, I had posted it in this thread first; I’ve just been slowly converting posts here into blog posts over time because I find myself wanting to share them. I took a big break from paper reading lately to work on my toy projects, close the yard, write a bit (in French), and read books. I’ve read an interesting paper this week from Nora Bateson that I was about to annotate, but it’s a bit of a challenging one and it’s more of an essay than a paper; maybe I’ll have the time to cover it next week or something.
|
# ? Nov 24, 2023 22:40 |
|
Can't wait to hear shaggar's opinion on how things like these

quote:One example given is that one Electronic Health Record (EHR) system forces clinicians to prescribe blood thinners to patient meeting given criteria before they can end their session, even if the patient is already on blood thinners. So clinicians have to do a risky workaround where they order a second dose of blood thinners to log out (which is lethal if the patient gets it), quit the system, then log back in to cancel the second dose.

quote:There's also a case where a doctor couldn't find the required medication in the software. He found a custom field with free text where he noted the prescription, but the box was not visible on the other end so the prescription was never given and the patient lost half his stomach.

are actually the system working as intended and/or the doctor's fault
|
# ? Nov 24, 2023 22:55 |
|
Sagebrush posted:Can't wait to hear shaggar's opinion on how things like these

aside from admitting EHRs are lovely, he called doctors whiny babies and also blamed administrators:

Shaggar posted:EMRs are all really loving bad for sure, but then you throw on top of that how lovely hospital administrators are and how doctors are all whiney babies, and theres basically no system they wont gently caress up.

Shaggar posted:reasonable person: "im gonna solve authentication by giving these doctors prox cards!"
|
# ? Nov 25, 2023 00:22 |
|
Sagebrush posted:Can't wait to hear shaggar's opinion on how things like these The first one is bad design created by a group of doctors and the second is a doctor avoiding using the system because they are lazy and/or doing something wrong and/or ran into another bad design by the aforementioned doctors. I think what alot of people dont understand about EHRs is that they're hand built/configured based on the needs of doctors and patient care isnt really a consideration. Specifically with the first record it probably went down something like: "hey i always forget to enter in the script after i talk to the patient about it. Can you have the system remind me to do it?" "Sure we can provide a warning for any closed records where the patient is missing a treatment plan for a captured condition" months pass "Hey im still forgetting to enter the script because i dont look at the warnings, can you make it force me to enter it in?" "We can but its going to impact everyone, not just you" "Thats fine" WRT the actual problem its either that the implementation was naïve, i.e. "if patient condition needs blood thinners then require new blood thinner prescription. " or the implementation is good and checks for things like an existing blood thinner prescription and the existing prescription is not properly coded and the system cant tell its for blood thinners. This could be: Doctor is using the notes to store prescriptions and nurses know to look there. Doc entered a manual prescription because he couldnt find it in the list so there are no codes. Existing prescription has off-label blood thinning effects either not coded (as the FDA doesnt recognize them) or the implementation checking for contraindications didnt take coded side effects into account. etc.. etc... The case where you cant find the medicine is usually a bad UI that is compounded by docs not wanting to deal with even the slightest bit of resistance. 
Like if they do a search for the medicine and misspell it and your search doesnt handle the misspellings they're gonna blame the system, say they cant find it, and stick it in the notes (probably also misspelled). EHRs are extremely bad for the most part, but thats because they're designed by and for what doctors think they need instead of what they actually need. EPIC's whole thing is that they will build you what ever custom piece of poo poo you want. Its an entire industry making the most expensive homer cars to order.
|
# ? Nov 25, 2023 00:32 |
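Shaggar's naive-vs-better implementation distinction above can be sketched in a few lines. Everything here is hypothetical, every name and code is made up for illustration, but it shows how the "if condition needs blood thinners then require a new prescription" rule differs from one that checks for an existing coded order, and how an uncoded manual entry defeats even the better check:

```python
# hypothetical sketch of the two rule implementations described above:
# the naive rule fires on the condition alone; the better one first checks
# for an existing coded anticoagulant order. all names/codes are invented.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Order:
    drug_code: str                    # fake code; not a real coding system
    drug_class: Optional[str] = None  # None when hand-entered and uncoded

@dataclass
class Patient:
    needs_anticoagulation: bool
    orders: List[Order] = field(default_factory=list)

def naive_rule(p: Patient) -> bool:
    # "if patient condition needs blood thinners then require new prescription"
    return p.needs_anticoagulation

def better_rule(p: Patient) -> bool:
    # only block sign-off when no *coded* anticoagulant order exists.
    # an uncoded manual entry still trips it: exactly the failure mode above.
    covered = any(o.drug_class == "anticoagulant" for o in p.orders)
    return p.needs_anticoagulation and not covered

# a patient whose existing blood thinner was entered manually, without codes
pt = Patient(True, [Order("warfarin-free-text")])
print(naive_rule(pt), better_rule(pt))  # both still demand a duplicate script
```

Swap the free-text order for one coded as an anticoagulant and `better_rule` stops firing while `naive_rule` still does, which is the "implementation was naïve" case from the post.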
|
I received comments on that post on cohost from a bunch of people, some of which were interesting:

quote:Just about twenty years ago, I worked for a division of a no-longer-extant conglomerate that produced software for hospitals, warehouse and store rooms, in our particular office's case. It remains one of the few jobs where I had to raise my voice regularly, because every decision involved someone saying "well, how can we know how the end users would use this," and me gesturing emphatically out the conference room window at the hospital across the street. I wanted to walk over and ask people for help, but they hated that idea, in favor of working from even-then-outdated advice like the password expiration nonsense mentioned early here...

quote:I was a nurse at a healthcare company that had a group of clinicians come into the IT department to "give feedback on the software"--something we were all very eager to do. But then we spent the entire time doing basic testing like "when you click on the start button, does the program start?" and I never did figure out why they needed nurses for that.

I'm sure it can manage to be a mix of both over-indexing on the needs of some users and generally not getting the feedback or any observational data from broader users either.
|
# ? Nov 25, 2023 00:53 |
|
yeah you absolutely cannot ask operations what they want because they will waffle back and forth between asking you to replicate the same bad process that they use today or a totally new process that doesnt do what they need. Its classic homer car. but you also cant ignore operations because they're the ones doing the work. The best solution is getting to the heart of what they're trying to accomplish rather than what features they would like that they think will accomplish that task. i.e. You want to ask "What are you trying to bake?" not "Would you like a feature where you can change the frosting flavor?". If they're baking bread and not cakes that feature doesnt help them and you're relying on them to understand that frosting doesnt go on bread. You might think they should know this because they're bakers, but they will still demand the feature. This could be because they think you're the expert (you are making the software after all and management says we need the software), or they dont actually know what baking is, or they're a psycho who will slather frosting all over the bread if you let them. And then you probably have some british guy who refers to cake as sugary bread or something so he thinks its totally fine. That kind of good requirements gathering of course never happens, so your actual best bet is finding 1 or 2 people from the operations group who actually know what they're doing and getting it from them even if its a back channel thing. Also, telemetry. I loving love telemetry. Just the number of bugs its let me fix alone is wonderful, but you can see what the users are actually doing vs what they say they're doing and its so incredibly valuable.
|
# ? Nov 25, 2023 01:22 |
|
i like that they tricked those nurses into doing free QA testing, that's a good one
|
# ? Nov 25, 2023 01:24 |
|
Shaggar posted:The best solution is getting to the heart of what they're trying to accomplish rather than what features they would like that they think will accomplish that task. i.e. You want to ask "What are you trying to bake?" not "Would you like a feature where you can change the frosting flavor?". in industrial design the expression for this is "people want toast, not toasters"
|
# ? Nov 25, 2023 03:10 |
I want a toaster that makes toast from bread
|
|
# ? Nov 25, 2023 03:18 |
I'm sure someone has tried to sell pre-toasted bread and it was Bad
|
|
# ? Nov 25, 2023 03:18 |
Foods are special in that if they were radioactive their half life would kill everyone
|
|
# ? Nov 25, 2023 03:19 |
|
PokeJoe posted:I'm sure someone has tried to smell pre toasted bread and it was Bad thats rusks. you can get em for babies. also hardtack, which is still popular in alaska, japan and korea
|
# ? Nov 25, 2023 03:24 |
that's not the same thing. toast is actually good
|
|
# ? Nov 25, 2023 03:24 |