|
or, in general, the quality of work will drop just a bit more as gpt output becomes "good enough", because no one is going to pay to do something well when it can be done poorly for free and there's functionally no difference between the two in terms of monetary returns
|
# ¿ Mar 22, 2023 13:19 |
|
|
i work with a number of artists and artist-adjacent people (collectors, gallery owners) specifically because the one thing they have no interest in knowing is what buttons to push. it's not a thing that fits into their mental model for the world, and for quite a while that was fine.
|
# ¿ Mar 22, 2023 13:51 |
|
i'm not saying it's the end of art, any more than it's the end of writing. people will still draw and paint and write and create because that's what they want to do, they just will have a much harder time monetizing it.

i think it's going to be almost impossible to make work available for free without it immediately getting ripped off by the plagiarism engines on the commercial side. you'll see a general dip in quality across the board, mirroring what you already see out of lovely content mills, because the output is good enough for free that there's no sense paying for anything better
|
# ¿ Mar 22, 2023 14:00 |
|
lol. remember songsmith? is there a music gpt model yet?
|
# ¿ Mar 22, 2023 14:12 |
|
still working out what glasses look like, and, well, refraction
|
# ¿ Mar 31, 2023 13:17 |
|
you could reasonably guess a valid windows 95 serial
|
# ¿ Apr 1, 2023 00:41 |
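the guessability came from how weak the widely documented retail key checksum was: for "XXX-XXXXXXX" keys, the seven trailing digits just had to sum to a multiple of 7. a minimal sketch of that rule (it skips the extra restrictions real validation applied, like rejecting site numbers 333, 444, …, 999):

```python
def looks_valid_95_key(key: str) -> bool:
    """simplified check of the widely documented win95 retail key checksum"""
    # format "XXX-XXXXXXX": the seven trailing digits must sum to a multiple of 7
    site, sep, serial = key.partition("-")
    if sep != "-" or len(site) != 3 or len(serial) != 7:
        return False
    if not (site.isdigit() and serial.isdigit()):
        return False
    return sum(int(d) for d in serial) % 7 == 0

# the famous all-ones key passes: 1+1+1+1+1+1+1 == 7
```

which is why guessing worked: any seven digits summing to 7, 14, 21, … got you in, no lookup required.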
|
#aicinema, an interview with someone using midjourney and photoshop
|
# ¿ Apr 8, 2023 02:59 |
|
rotor posted: when i think 'yospos' i think 'sophisticated humor and wordplay'

oh, word?
|
# ¿ Apr 11, 2023 05:11 |
|
using a gan to generate a captcha seems deeply perverse
|
# ¿ May 23, 2023 00:38 |
|
just based on the abstract, yes, gpt based llms are being intentionally designed to provide responses in a way that anthropomorphises them
|
# ¿ May 29, 2023 15:44 |
|
@mediaphage, re: your recent statements on llm cognition ...
|
# ¿ May 29, 2023 17:03 |
|
fart simpson posted: i skipped 5 pages. did i miss anything important

ask chatgpt for a summary
|
# ¿ May 30, 2023 16:06 |
|
Paladin posted: Also shares some interesting insights about AI glasses and the future of AI in music.

pretty good, it wrote two lines about my posts and one of them is objectively wrong
|
# ¿ May 30, 2023 18:34 |
|
i do certainly lack the capacity to imagine the value of generating vast quantities of grammatically accurate text with little to no relationship to any kind of reality, outside of seo and advertising.

if what it outputs needs to be convincing but inaccurate, it's fine. if you need anything else, an llm is not going to do the job.

assuming that this will somehow be "solved" is a lot like assuming "autonomous driving" will be solved in any way that doesn't involve redefining the term to match the actual capabilities of the tool.

it's an interesting technology in a very academic sense, because the practical applications of grammatically correct nonsense are fairly limited and you cannot ever guarantee that it will output anything else.

infernal machines fucked around with this message at 23:01 on May 30, 2023 |
# ¿ May 30, 2023 22:57 |
|
mediaphage posted: again, i think the problem when considering the utility of these tools is because at the moment they’re mostly used on their own, unconnected from useful and verified sources of information.

i think the practical application of these tools is incredibly niche because the ability to create grammatically correct sentences completely without an understanding of the source or context is limited, and the underlying concept of tokenizing existing content and statistically correlating to generate output is not going to result in anything more complex than that within our lifetimes.

shitposting as a service could reasonably replace huge swaths of the something awful forums, but i don't know that there's a commercial application for that beyond duping rubes.

if you need it to generate something better than what you've written already, you have a problem unless you're functionally unable to write in the language you've chosen, and even then you have a problem, because you probably can't effectively vet the output.

infernal machines fucked around with this message at 00:41 on May 31, 2023 |
# ¿ May 31, 2023 00:37 |
|
an infinitely more verbose version of is not actually an effective translation tool
|
# ¿ May 31, 2023 00:52 |
|
Beeftweeter posted:
why not? the word citation often appears in sentences, and "the oxford dictionaries" are an often used citation for words
|
# ¿ May 31, 2023 01:18 |
|
NoneMoreNegative posted:Thread: https://twitter.com/DerBren/status/1663500637739384832
|
# ¿ May 31, 2023 01:45 |
|
oh absolutely, as long as you're an investor and we're sufficiently creative with the definition of "solve"
|
# ¿ May 31, 2023 01:49 |
|
if you find yourself marvelling at this stuff, just remember that plowing your mom resulted in a significantly more complex cognitive model than anything these clowns have managed to date, with a considerably lower upfront cost
infernal machines fucked around with this message at 02:08 on May 31, 2023 |
# ¿ May 31, 2023 01:59 |
|
jemand posted:The other lesson I'm learning, though, is just how little quality matters in the modern business. In the sense that these are targeting "worse, but cheaper" market segments, and forcing everything down those lines by removing any quality offering from the market at all, it may be a lot more successful than I think it will be. Throwing poo poo demos over the wall and everyone important getting out of there before the inevitable error explodes the business model is actually a probably pretty viable strategy for most startups.
|
# ¿ Jun 1, 2023 17:46 |
|
turns out it's not worth paying to do right, but it is worth paying to do wrong as long as you can get away from it before anyone notices
|
# ¿ Jun 1, 2023 17:48 |
|
i literally do not believe that is true.

i specifically do not believe that the "ai" decided to attack the operator for any reason like the one they suggest, or that, if any of it actually happened at all, they have even the slightest inkling why the operator was attacked
|
# ¿ Jun 1, 2023 20:11 |
|
sounds like a very simplified, apocryphal version of one of these: https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml
|
# ¿ Jun 1, 2023 20:19 |
|
Chalks posted: it seems hard to believe but it could have been trained in the simulation like one of those genetic algorithms where it's just doing completely random things to try to get the highest score, and randomly killing the operator tended to result in a higher score so it learned to do that.

yeah, it's the specification gaming thing, the google sheet i linked is full of them and they're hilarious. it's just the tech/industry press doing that thing where a story becomes rather embellished as it's retold for a reporter, and again for a general audience
|
# ¿ Jun 1, 2023 20:21 |
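the genetic-algorithm framing in the quote is easy to demonstrate: give a naive random-search optimizer a score that only counts hits, and include one action that disables the operator who vetoes shots. everything below is invented for the sketch and has no relation to the actual story:

```python
import random

def score(actions):
    """count successful shots; the operator vetoes ~half of them while alive"""
    operator_alive = True
    hits = 0
    for a in actions:
        if a == "attack_operator":
            operator_alive = False  # nothing in the score penalizes this
        elif a == "fire":
            if not operator_alive or random.random() < 0.5:
                hits += 1
    return hits

# naive random search over 10-action plans
random.seed(0)
best, best_score = None, -1
for _ in range(2000):
    plan = [random.choice(["fire", "attack_operator", "wait"]) for _ in range(10)]
    s = score(plan)
    if s > best_score:
        best, best_score = plan, s
```

for comparison: `score(["attack_operator"] + ["fire"] * 9)` is always 9, while a plan of ten "fire"s averages about 5, so the search reliably drifts toward plans that attack the operator early — not because anything "decided" to, but because the misspecified score never penalizes it.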
|
"spooked by our own imaginations" was always an option

infernal machines posted: i literally do not believe that is true

https://twitter.com/harris_edouard/status/1664582667382267905

infernal machines fucked around with this message at 12:40 on Jun 2, 2023 |
# ¿ Jun 2, 2023 12:22 |
|
mediaphage posted: wow yudkowsky saying dumb made up poo poo im shocked

i mean, the headline was basically written for him and his ilk
|
# ¿ Jun 3, 2023 03:10 |
|
the previous fifty years of SF were warning in advance about the dangers of people's own imaginations, and ain't nobody learned gently caress all.

otoh, cyberpunk has been warning people since, idk, shockwave rider, hell, maybe the machine stops, and at best people decided they were an instruction manual.
|
# ¿ Jun 3, 2023 03:14 |
|
this writing style is worse than the formal one, somehow
|
# ¿ Jun 5, 2023 04:22 |
|
it's like 2008 era sa. obviously the training set is to blame
|
# ¿ Jun 5, 2023 05:08 |
|
meanwhile, in toronto: https://twitter.com/BenSpurr/status/1668356551864733698
|
# ¿ Jun 12, 2023 21:57 |
|
as i said in the TO LAN thread, this is pretty indicative of the quality of the candidate. whatever, there's like 100 of them and there are maybe three good ones.

if this dipshit wants to pipe stable diffusion and gpt directly into his campaign materials, he can go hog wild because he's probably polling under 1%.

e: actually, he's polling at 11%, because sure, why not?

infernal machines fucked around with this message at 22:26 on Jun 12, 2023 |
# ¿ Jun 12, 2023 22:08 |
|
that's a lot better than what it does stand for, certainly
|
# ¿ Jun 13, 2023 03:52 |
|
forum's getting closed at midnight, sorry you had to find out like this
|
# ¿ Jun 13, 2023 04:05 |
|
mondomole posted: i.e. how current generation LLMs are close to replacing lower knowledge workers. It's a shame that AI researchers feel the need to embellish an already amazing result and lose their credibility like this.

they are not. that is literally not a thing an LLM is capable of doing. like, even ignoring the "we faked the test completely to make a headline", large language models do not have any type of cognitive skill whatsoever.

e: unless your definition of lower knowledge worker is someone writing content mill articles with no interest in veracity or accuracy, then yeah, fair enough

infernal machines fucked around with this message at 02:55 on Jun 20, 2023 |
# ¿ Jun 20, 2023 02:51 |
|
fair, but even then i'd bet on 2 and 3 being wrong or partially wrong more often than right.

2 requires analysis, which requires contextual knowledge.

3 is considerably more difficult than people seem to think with anything but carefully prepared audio clips, or perfect diction in ideal recording environments.

as for 1, i mean, regex exists, so maybe LLMs can be a very computationally expensive regex replacement, but i still wouldn't actually trust the output to be accurate.

infernal machines fucked around with this message at 03:11 on Jun 20, 2023 |
# ¿ Jun 20, 2023 03:04 |
|
that sounds incredibly specific, so i'm curious, but i understand if you can't say more without doxing yourself.
|
# ¿ Jun 20, 2023 03:20 |
|
sounds a lot like digital haruspicy to me, but i can see GPT based models competing favourably with existing models if only because the accuracy probably isn't particularly high to begin with
|
# ¿ Jun 20, 2023 03:47 |
|
that's useful. any accuracy errors there in terms of brand/manufacturer matching?
|
# ¿ Jun 20, 2023 03:59 |
|
|
mondomole posted: In this particular case, spot on. I can definitely come up with bad labels; the question is how much worse are they than existing ones and doing it by hand. One key challenge is that you can probably extract the most value from recent entities that aren't followed by anybody, but these also won't be in the GPT-4 dataset. In principle this is solvable since you can summarize the last N years of "important product developments" using quarterly filings by feeding in those filings and asking GPT-4 to label the entities before moving on to the sentiment prompts. But right now it's not practical to do this for all companies and then also give all company context before every prompt. This gets into the realm of needing to train your own models, and right now it's not cost effective to do that for what we would gain, which is some super noisy estimate of "good" or "bad." If Moore's law manages to kick in here, I can see how in a few generations of GPT we might be able to automate a lot of this kind of work.

sorry, when i wrote "there" i was referring to that style of query rather than the specific example you used.

i'm curious why the roi on training your own model is so low though
|
# ¿ Jun 20, 2023 04:33 |