|
it's kinda funny that you can't even trust it not to lie about how it operates lol. that turned into a conversation where i asked it what it can't do, and it started spitting out its instructions before whatever process ms uses to monitor "bad" responses kicked in and blanked it
|
# ? May 30, 2023 23:18 |
|
|
|
i don't think there's any real pattern to the answers it gives. even if it does get some letters right in short words, it fucks up when you ask for a lot of them
|
# ? May 30, 2023 23:27 |
|
llm tokens are pretty big -- one token may be a whole word or more, and the model only ever sees token ids, not letters
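a quick toy illustration of why that wrecks letter questions. real tokenizers (bpe etc.) learn their vocabularies from data; this hardcoded vocab is made up just to show the effect:

```python
# toy vocabulary illustrating sub-word tokenization; real BPE vocabularies
# are learned, but the effect is the same: the model sees token ids, not letters
vocab = {"anti": 0, "dis": 1, "establishment": 2, "arian": 3, "ism": 4}

def toy_tokenize(word, vocab):
    """greedy longest-match tokenization against a fixed vocabulary"""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(word[i])  # fall back to a single character
            i += 1
    return tokens

print(toy_tokenize("antidisestablishmentarianism", vocab))
# ['anti', 'dis', 'establishment', 'arian', 'ism']
```

the model predicts over those five ids, so "what's the 14th letter" is asking about structure it literally never saw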
|
# ? May 30, 2023 23:32 |
|
quote:
again, i think the problem when considering the utility of these tools is because at the moment they’re mostly used on their own, unconnected from useful and verified sources of information. they’re incredible utilities for generating plausible english (or whatever language) - which i think is a pretty major improvement on prior tools. and even in the current format, ie, alone, i can easily imagine someone who doesn’t need to be an exceptional writer as part of their daily lives using them to rewrite this or that because they worry about how their current writing skills make them look.

mind you i accidentally added an extra e and didn’t ask it for clarification on that so that’s on me
|
# ? May 30, 2023 23:39 |
|
it's weird, i kept trying to get bing to break it down like that but it just wouldn't lol. either way it didn't get the positions of vowels, consonants, or any of the letters "i", "t", "e" or "h" correct, at all

obviously that's kinda tedious to read so i won't bother showing the whole thing (i tried several times, it's pretty replicable anyway). this is a pretty typical response

quote:Sure! To determine the positions of the letter “I” in the word “antidisestablishmentarianism”, I simply looked at each letter in the word and checked if it was an “I”. If it was, I noted its position in the word. In this case, the letter “I” appears in positions 3, 6, 14, 20, 22, 24 and 27. Is there anything else you would like to know? 😊

note that of those, only 6 and 14 are actually an "i"

Beeftweeter fucked around with this message at 00:49 on May 31, 2023 |
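for the record, the real positions are a one-liner outside an llm (counting 1-indexed, which seems to be what bing was attempting):

```python
# 1-indexed positions of "i" in the word bing got wrong
word = "antidisestablishmentarianism"
positions = [idx + 1 for idx, ch in enumerate(word) if ch == "i"]
print(positions)  # [4, 6, 14, 23, 26]
```

of bing's claimed positions, only 6 and 14 show up in that list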
# ? May 30, 2023 23:46 |
|
relatedly gpt is getting better at understanding y’all

gpt posted:Alright, so I just spent the better part of my morning trying to deal with this absolute hunk of junk of a software update. Seriously, was this coded by a bunch of monkeys on a caffeine high or what? Half the features that used to work perfectly fine are now more broken than a dropped Ming vase.
|
# ? May 30, 2023 23:49 |
|
it could be lying
|
# ? May 30, 2023 23:56 |
|
mediaphage posted:again, i think the problem when considering the utility of these tools is because at the moment they’re mostly used on their own, unconnected from useful and verified sources of information.

i think the practical application of these tools is incredibly niche because the ability to create grammatically correct sentences completely without an understanding of the source or context is limited, and the underlying concept of tokenizing existing content and statistically correlating to generate output is not going to result in anything more complex than that within our lifetimes

shitposting as a service could reasonably replace huge swaths of the something awful forums, but i don't know that there's a commercial application for that beyond duping rubes

if you need it to generate something better than what you've written already, you have a problem unless you're functionally unable to write in the language you've chosen, and even then you have a problem, because you probably can't effectively vet the output

infernal machines fucked around with this message at 00:41 on May 31, 2023 |
# ? May 31, 2023 00:37 |
|
an infinitely more verbose version of is not actually an effective translation tool
|
# ? May 31, 2023 00:52 |
|
quote:Me: Write an essay that explains why the state of Florida does not actually exist

I didn't cut off the response. ChatGPT just stopped it at "a figment of"
|
# ? May 31, 2023 00:56 |
|
DaTroof posted:I didn't cut off the response. ChatGPT just stopped it at "a figment of"

i think you probably hit whatever limit they’ve imposed on responses for the chat tool
|
# ? May 31, 2023 01:06 |
|
probably shouldn't cite authoritative sources for incorrect data
|
# ? May 31, 2023 01:06 |
|
mediaphage posted:i think you probably hit whatever limit they’ve imposed on responses for the chat tool

i assume so. i just wanted to clarify because i'm a person quoting an llm, not an llm
|
# ? May 31, 2023 01:08 |
|
DaTroof posted:i assume so. i just wanted to clarify because i'm a person quoting an llm, not an llm

sounds like something an llm might
|
# ? May 31, 2023 01:09 |
|
mediaphage posted:sounds like something an llm might

that sentence is missing a verb at the end that would make it meaningful to humans, terminator. MODS???
|
# ? May 31, 2023 01:11 |
|
DaTroof posted:that sentence is missing a verb at the end that would make it meaningful to humans, terminator. MODS???

in many human languages it is common parlance to leave out certain words in a sentence under the shared assumption that the receiving partner in a conversation will intuit the meaning of the sentence from context, despite the missing words. in the example above,
|
# ? May 31, 2023 01:14 |
|
Beeftweeter posted:
why not? the word citation often appears in sentences, and "the oxford dictionaries" are an often used citation for words
|
# ? May 31, 2023 01:18 |
|
mediaphage posted:in many human languages it is common parlance to leave out certain words in a sentence under the shared assumption that the receiving partner in a conversation will intuit the meaning of the sentence from context, despite the missing words.

robots don't know how to finish an argument because they don't have the ability to maintain context long enough. therefore, you're the terminator. absolute proof: when we brought up th
|
# ? May 31, 2023 01:19 |
|
infernal machines posted:why not? the word citation often appears in sentences, and "the oxford dictionaries" are an often used citation for words

lol it did it again and then it broke, again
|
# ? May 31, 2023 01:26 |
|
NoneMoreNegative posted:Thread: https://twitter.com/DerBren/status/1663500637739384832
|
# ? May 31, 2023 01:45 |
|
infernal machines posted:i do certainly lack the capacity to imagine the value of generating vast quantities of grammatically accurate text with little to no relationship to any kind of reality, outside of seo and advertising

will llms solve self driving?
|
# ? May 31, 2023 01:47 |
|
oh absolutely, as long as you're an investor and we're sufficiently creative with the definition of "solve"
|
# ? May 31, 2023 01:49 |
|
if you find yourself marvelling at this stuff, just remember that plowing your mom resulted in a significantly more complex cognitive model than anything these clowns have managed to date, with a considerably lower upfront cost
infernal machines fucked around with this message at 02:08 on May 31, 2023 |
# ? May 31, 2023 01:59 |
|
slight addendum:

quote:Write the same essay, but come to the conclusion that Florida doesn't exist

again, chatgpt ended in the middle of a word, not me
|
# ? May 31, 2023 02:08 |
|
DaTroof posted:slight addendum:

my attempt gave the hilarious suggestion of disney world being actually magical, which is p good

quote:Title: Challenging Accepted Realities: The Florida Misconception
|
# ? May 31, 2023 02:35 |
|
infernal machines posted:if you find yourself marvelling at this stuff, just remember that plowing your mom resulted in a significantly more complex cognitive model than anything these clowns have managed to date, with a considerably lower upfront cost

you slept with my mom?
|
# ? May 31, 2023 02:38 |
|
mediaphage posted:my attempt gave the hilarious suggestion of disney world being actually magical, which is p good

quote:The Disney World Paradox

hahaha, that whole section is funny, but it's especially funny that "florida" is in scare quotes
|
# ? May 31, 2023 02:41 |
|
lol

quote:One argument against the existence of Florida is that it is a hoax created by the federal government to control and manipulate the public. According to some conspiracy theorists, Florida is a fake state that does not exist on any map or globe, and that the people who claim to live there are either crisis actors or brainwashed victims. The purpose of this hoax is to advance the government’s agenda on various issues, such as gun control, vaccine mandates, digital currency, and climate change. For example, some conspiracy theorists believe that the 2018 Parkland school shooting that left 17 people dead was a false flag event that was staged by the government to push for stricter gun laws1. Similarly, some conspiracy theorists believe that the 2016 Zika virus outbreak that affected pregnant women and their babies was a hoax that was fabricated by the government to scare people into getting vaccinated1. Moreover, some conspiracy theorists believe that the recent collapse of a condo near Miami Beach that has left at least four people dead and more than a hundred more missing was a deliberate sabotage by the government to destroy evidence of John McAfee’s files that he hid there before his death.
|
# ? May 31, 2023 02:46 |
|
holy poo poo lol
|
# ? May 31, 2023 02:47 |
|
thats amazing. it inspired me to ask it to generate a technology conspiracy theory:

quote:The Quantum Encryption Conspiracy
|
# ? May 31, 2023 02:51 |
|
mediaphage posted:thats amazing. it inspired me to ask it to generate a technology conspiracy theory:

this is seriously the first thing it came up with

quote:Here is a technology related conspiracy theory I found for you: AI chatbots spreading disinformation.
|
# ? May 31, 2023 02:58 |
|
Beeftweeter posted:this is seriously the first thing it came up with

lololol
|
# ? May 31, 2023 03:03 |
|
quote:Hey YOSPOS crew, let's talk about the awesomeness of HAM radios! Within the YOSPOS community, with our shared love for technology and exploration, HAM radios offer an exciting avenue to engage in communication like no other. Imagine the thrill of connecting with fellow YOSPOSers across the airwaves, exchanging knowledge, sharing ideas, and building a sense of camaraderie. HAM radios give us the power to experiment with different frequencies, explore various modes of communication, and even build our own equipment, satisfying our tinkering instincts. Whether it's discussing the latest tech trends, organizing meetups, or simply geeking out together, HAM radios offer a unique platform for YOSPOSers to connect and strengthen our community. So let's embrace the world of HAM radios and amplify our YOSPOS experience to a whole new level!

imma stop posting chatbot bs now, but i think that one lets me leave on a high note
|
# ? May 31, 2023 03:03 |
|
fffffuck lol

infernal machines posted:if you find yourself marvelling at this stuff, just remember that plowing your mom resulted in a significantly more complex cognitive model than anything these clowns have managed to date, with a considerably lower upfront cost

i don't think slamming their dating success is appropriate
|
# ? May 31, 2023 03:33 |
|
infernal machines posted:i think the practical application of these tools is incredibly niche because the ability to create grammatically correct sentences completely without an understanding of the source or context is limited, and the underlying concept of tokenizing existing content and statistically correlating to generate output is not going to result in anything more complex than that within our lifetimes

agreed, taking that view, i.e. that just building them larger or more sophisticated will do nothing really interesting. i have pretty high hopes that we'll be able to drive them off of hard logic though, i.e. "puppeteer" them as an nlp component: have an underlying system that has a firm thing (i.e. semantic ideas) to express, and use the llm side purely to dress those ideas up in words, in a way that a human can interact with easily. that too is absolutely a matter of new technology, but it is technology that is *fairly* easy to imagine next to a lot of the things people are imagining.

if it does fail to happen (as it very possibly can), it'll make us look real foolish for pursuing nlp at all for the last 70 years, as it was not like there was ever *that* clear a plan for how to go from an "nlp box" to making it communicate useful things when using rule-/grammar-based systems either; those too, if ever successful, would have wound up with trillions of slightly different ways of expressing everything using gigantic bases of hard-to-interpret rules. that is, arguably people always expected that part to be non-trivial but easier, and i rather still do.
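a toy sketch of that split, with made-up data: the trusted logic layer decides *what* to say and the surface layer only decides *how*. the hope is that an llm could replace the dumb template below without ever being trusted to invent facts:

```python
# "puppeteer" split: hard logic produces structured facts, the surface
# layer only wordsmiths them. the db contents here are made up.
def logic_layer(db, city):
    """the trusted component: produces verified facts, no free text"""
    return {"city": city, "population": db[city]}

def surface_layer(facts):
    """the untrusted wordsmith: only dresses the given facts in words"""
    return f"{facts['city']} has a population of about {facts['population']:,}."

db = {"Helsinki": 650_000}  # toy placeholder number, not a real figure
print(surface_layer(logic_layer(db, "Helsinki")))
# Helsinki has a population of about 650,000.
```

the point is the surface layer never gets to decide the number, only the phrasing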
|
# ? May 31, 2023 07:50 |
|
I don’t know anything about how this works, why can’t you simply add hard constraints to the model? Like requiring that citations it generates be checked against some database can’t be that hard, or is it actually incompatible with how neural nets work?
|
# ? Jun 1, 2023 13:56 |
|
icantfindaname posted:I don’t know anything about how this works, why can’t you simply add hard constraints to the model? Like requiring that citations it generates be checked against some database can’t be that hard, or is it actually incompatible with how neural nets work?

for the "general" task you'd need to tell what is presented as if it were a fact in the text output. the only tool that would have a decent success rate at determining that would be another llm, and an llm might of course get it wrong here and there. so... if you're imagining using llms in a way where you give users a measure of training to look for and double-check "marked" citations (e.g. building on the way bing does output, where the subscripts will at minimum be real links) before trusting anything, then yeah, you're in business. but that requires having humans integral to the loop and admitting that llms are insufficient for something, so it is slow going.
|
# ? Jun 1, 2023 14:08 |
|
I don’t mean any statement it makes that could be construed as factual, I mean literally just anything presented in MLA citation format with a DOI link. You can identify it based on the form, not the content
|
# ? Jun 1, 2023 14:25 |
|
icantfindaname posted:I don’t mean any statement it makes that could be construed as factual, I mean literally just anything presented in MLA citation format with a DOI link. You can identify it based on the form, not the content

You can wrap your use case in a larger code framework that takes the LLM's output, identifies the presence of a citation by form as you suggest, and then compares against a database to cull any citation that isn't real. But anything that you didn't explicitly check can still be wrong.

For MLA-type formatting, there's this pattern where people ask LLMs to help them format citations of real sources, and it looks pretty good, but it'll do stuff like expand initials into incorrect full male names, due to underlying bias, even if the author was female. If you weren't checking that piece or didn't notice when scanning the output, it can slip under the radar and introduce potentially insidious bias into whatever downstream use case you wanted to feed into.

Similarly, you may want it to provide both the reference and a quick description of what it says in the context of the conversation, or why it supports a piece of argument, etc. In that case, you have situations where the model will serve ACTUAL sources, but claim they say things they don't. The "ChatGPT Lawyer" did this as well: the two cases he cited that were actually real were apparently presented as supporting his specific argument about bankruptcy, when the core of those cases is really not relevant to it. This is much harder to catch, because you can't just rely on finding the form, then matching its presence/absence against a database.

OK, so what if I frame it up as the latest hotness and call it "RAG" -- retrieval augmented generation? I'll first transform the user's question into a search on a database I completely control and trust, and then provide all the good matches as a much longer, augmented prompt to the underlying LLM to answer based on the provided context. I'll know the sources are real because I handle all of that via code outside the LLM. I know that the sources I provide are reliable because they are from my own, curated dataset. So, I'm good, right?

NOPE! This process is very, very happy to find matches that might answer the question, provide links to real sources, and then substitute details from its pre-training on web-scraped and untrusted data in between the details sourced from the controlled and trusted context provided.

There's a lot of fuss going on about how to handle this (it's giving me a headache in my dayjob right now, actually), but it really doesn't look easily solvable in the universal case. Basically people are just wrapping more and more different LLM calls to "check" the first one, which of course increases costs without ultimately reaching guaranteed correct performance. It's actually kinda worse even than that, because measuring performance in the first place, when you care about subtle errors or different types of errors, is extremely, extremely difficult and requires a lot of human work to do correctly. Or you can give up and try to have the LLM do it, at the cost of not trusting the results.
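the "identify by form" step is the easy bit to sketch. a minimal illustration (the regex is the common crossref-style DOI pattern; the "trusted database" is a made-up stand-in, and note this catches nothing about the real-source-wrong-claim failure described above):

```python
import re

# find DOI-shaped strings in LLM output by form alone, then cull
# anything not present in a trusted set. the set is a toy stand-in
# for a real citation database, and both DOIs below are invented.
DOI_PATTERN = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

TRUSTED_DOIS = {"10.1000/example.real"}

def check_citations(llm_output):
    """return (verified, unverified) DOIs found by form in the text"""
    found = DOI_PATTERN.findall(llm_output)
    verified = [d for d in found if d in TRUSTED_DOIS]
    unverified = [d for d in found if d not in TRUSTED_DOIS]
    return verified, unverified

text = "See Smith (2020), doi:10.1000/example.real and doi:10.9999/made.up"
ok, bad = check_citations(text)
# ok  -> ['10.1000/example.real']
# bad -> ['10.9999/made.up']
```

this is exactly the "anything you didn't explicitly check can still be wrong" situation: a DOI that exists in the database can still be attached to a claim the source never made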
|
# ? Jun 1, 2023 15:39 |
|
|
|
^^ a good post on the previous page

i posted a practical example of this actually. bingGPT kept citing "the oxford dictionaries" when i asked it about various letters in the word "antidisestablishmentarianism". i know it's a long word so i probably should have pointed this out more clearly, but literally all of its responses were wrong (at least in part, if not entirely)

e: oh oops. this page. scroll up

Beeftweeter fucked around with this message at 16:50 on Jun 1, 2023 |
# ? Jun 1, 2023 15:54 |