|
i think it's super cool we can use tech to do pointless things no one wanted |
|
# ? May 30, 2023 11:35 |
|
doing pointless things does at least improve on what we usually use tech for
|
# ? May 30, 2023 11:59 |
|
NoneMoreNegative posted: Thread: that’s cool and all, but I want to describe to it in vague terms what is beyond the borders of the original image and have it fill in the blanks with that in mind |
|
# ? May 30, 2023 13:24 |
|
i skipped 5 pages. did i miss anything important
|
# ? May 30, 2023 13:45 |
|
no
|
# ? May 30, 2023 13:50 |
|
thanks
|
# ? May 30, 2023 13:59 |
|
fart simpson posted: "i skipped 5 pages. did i miss anything important"

ask chatgpt for a summary |
|
# ? May 30, 2023 16:06 |
|
infernal machines posted: "ask chatgpt for a summary"

I had to use chatGPT to build a post retrieval system first to make copy pasting posts easier, then complain to it after it didn't understand how quotes work.

Paladin posted: "Now, please summarize the entire thread, including highlights of humorous moments and viewpoints of each user, but in a more conversational style, ideally mimicking the posting style of YOSPOS (but don't overdo it)."

ChatGPT4 posted: "First off, we got Juul-Whip tossing around some high praise for a cloud-based application repository. Pretty standard tech hype. There's a joke or two hidden in there, but it's mostly just them pointing out that this stuff can be used in a ton of different ways." |
|
# ? May 30, 2023 18:18 |
|
Paladin posted: "Also shares some interesting insights about AI glasses and the future of AI in music."

pretty good, it wrote two lines about my posts and one of them is objectively wrong |
|
# ? May 30, 2023 18:34 |
|
https://twitter.com/Liv_Lanes/status/1663335430702481409?s=20 (also my kingdom for an E0)
|
# ? May 30, 2023 18:35 |
|
infernal machines posted: "objectively wrong"

idk sounds like your posts all right |
|
# ? May 30, 2023 18:35 |
|
Paladin posted: "I had to use chatGPT to build a post retrieval system first to make copy pasting posts easier, then complain to it after it didn't understand how quotes work."

lol. it got my username wrong somehow and also the content of the post |
|
# ? May 30, 2023 18:40 |
|
Beeftweeter posted: "lol. it got my username wrong somehow and also the content of the post"

i think we all know beefwetter is the superior username and actually you're wrong if you dont change it |
|
# ? May 30, 2023 18:46 |
|
Beeftweeter posted: "lol. it got my username wrong somehow and also the content of the post"

I think I made a typo and it just went along with it. And yes I have read your username as Beefwetter for years, because:

mediaphage posted: "i think we all know beefwetter is the superior username and actually you're wrong if you dont change it" |
|
# ? May 30, 2023 18:47 |
|
thinkin bout that beef wet
|
# ? May 30, 2023 18:52 |
|
mediaphage posted: "i think we all know beefwetter is the superior username and actually you're wrong if you dont change it"

hmm. we could do with a new namechange thread |
|
# ? May 30, 2023 19:02 |
|
i don’t remember having chatgpt write a poem about steve ballmer
|
# ? May 30, 2023 19:03 |
|
i don’t remember having chatgpt write a poem about steve ballmer
|
# ? May 30, 2023 19:03 |
|
i don’t remember having chatgpt write a poem about steve ballmer
|
# ? May 30, 2023 19:04 |
|
post hole digger posted: "i don’t remember having chatgpt write a poem about steve ballmer"

neither do i |
|
# ? May 30, 2023 19:08 |
|
I read of someone using gpt to make meal plans and you can even ask it to only include seasonal foods etc. interesting and maybe a good use of ai |
|
# ? May 30, 2023 20:41 |
|
just asked it for a recipe for egg free pancakes and guess what it did
|
# ? May 30, 2023 20:45 |
|
mediaphage posted: "thinkin bout that beef wet"

https://www.youtube.com/watch?v=uMcAagFNrPY |
|
# ? May 30, 2023 20:50 |
|
echinopsis posted: "just asked it for a recipe for egg free pancakes and guess what it did"

hmm. gonna go with it suggested you buy an egg at a retailer that is also offering free pancakes with egg purchase |
|
# ? May 30, 2023 21:01 |
|
echinopsis posted: "just asked it for a recipe for egg free pancakes and guess what it did"

told you that there are no eggs in pancakes, and then if there are they are in the first and seventh position. |
|
# ? May 30, 2023 21:01 |
|
Oh yeah, side note: the best use of ChatGPT is to write glowing employee feedback anytime you interact with someone on help desk, retail, etc. and get a survey asking how they did and there's a space to "write more comments". Good feedback often means cash bonuses or at the least a favorable performance review, maybe a raise, so help out the poor T1s while the job still exists. |
|
# ? May 30, 2023 21:10 |
|
Beeftweeter posted: "hmm. gonna go with it suggested you buy an egg at a retailer that is also offering free pancakes with egg purchase"

oh lol no it just gave me an egg free recipe |
|
# ? May 30, 2023 21:12 |
|
Cybernetic Vermin posted: "told you that there are no eggs in pancakes, and then if there are they are in the first and seventh position."

i tried the ol' paradox with 'the number of words in this sentence', where i tried to feed it an answer that didn't match the sample sentence. after the model identified it was contradictory, i asked how many words were in its explanation (it was 11 words); it said there were 16. i'm pretty sure the model implicitly followed the broken reasoning i'd provided (incidentally, the true word count and the claimed word count were one more and one less than the paradox i used) |
|
# ? May 30, 2023 21:39 |
|
i suspect that figuring it followed the incorrect reasoning provided is already overestimating the model's workings. part of why the models struggle with this sort of thing is that the input is provided tokenized into common subwords (turning "tokenized with common subwords" into something like " tok|en|ized| with| comm|on| sub|word|s|"). but if one believes the models exhibit emergent reasoning, the information about what characters are in those tokens obviously *is* available in the training data. e.g. you can absolutely guide the reasoning based on the information encoded, and it'll do any word i could think of that way, but ultimately that's pretty much just supplying additional reasoning by prompt (i.e. the tokenization of "T-U-R-T-L-E" is " T|-|U|R|-|T|-|L|-|E", so making the model state it and guiding it to look at it does the real work). you'd be able to train a model to do this specific task, but that's a matter of chasing small improvements; the mechanisms involved are not sufficient to allow arbitrary reasoning steps "internally" without training the model to spell them out. |
|
# ? May 30, 2023 21:56 |
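the subword point above can be sketched with a toy greedy tokenizer. the vocabulary here is invented for illustration (real BPE merge tables are learned and differ), but it shows how the model's input hides character boundaries:

```python
# toy greedy tokenizer illustrating the post above; the vocabulary is
# invented for this example, real BPE merge tables are learned and differ
def toy_tokenize(text):
    vocab = ["tok", "en", "ized", " with", " comm", "on", " sub", "word", "s"]
    tokens = []
    rest = text
    while rest:
        # greedily take the longest known piece at the front of the input
        for piece in sorted(vocab, key=len, reverse=True):
            if rest.startswith(piece):
                tokens.append(piece)
                rest = rest[len(piece):]
                break
        else:
            tokens.append(rest[0])  # unknown text: fall back to one character
            rest = rest[1:]
    return tokens

print(toy_tokenize("tokenized with common subwords"))
# ['tok', 'en', 'ized', ' with', ' comm', 'on', ' sub', 'word', 's']
```

none of those tokens carries an explicit "this piece contains the letter t" marker, which is why character-level questions can only be answered indirectly.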
|
hmm, looking at it that way, meaningful words in the reply i got would get closer to the count provided. iiuc, depending on how the statement was tokenized it could count short in a literal sense but be consistent internally?
|
# ? May 30, 2023 22:09 |
|
lol, i was just playing around with essentially the same thing. you're right, you can guide how it responds but bing in particular doesn't like that very much! i was able to confuse it to the point that it thought it had ended the conversation, but it didn't, so all i got were blank responses past that
|
# ? May 30, 2023 22:10 |
|
Agile Vector posted: "hmm, looking at it that way, meaningful words in the reply i got would get closer to the count provided. iiuc, depending on how the statement was tokenized it could count short in a literal sense but be consistent internally?"

possibly, but the fact that it is also trained with tokenizations that have a messy relationship with any such statement in the training data (i.e. sentences that talk about word counts) means that all such lexical counts might just be generally poorly represented internally. tbh we're immediately in interesting research asking these questions. |
|
# ? May 30, 2023 22:16 |
|
regardless of however bing has implemented openai's model - they say it uses gpt4, but it answers stuff much more like 3/3.5 - gpt4 does way better with a lot of stuff including this question. you can make it check its own answer to some extent as part of the original question, and it fares much better than the other responses in this thread.

chatGPT v3.5 posted: The word "turtle" is spelled as follows: T-U-R-T-L-E.

chatGPT v4 posted: The word "turtle" is spelled: T-U-R-T-L-E.

yeah a lot of this stuff is going to be bad and problematic but it's clear that they're also probably going to get better over time - which may be worse in some ways because they'll be much more believed when they spit out some hallucination or another |
|
# ? May 30, 2023 22:39 |
|
gpt and its ilk are just souped-up ouija boards |
|
# ? May 30, 2023 22:42 |
|
good for entertainment but otherwise very limited tools
|
# ? May 30, 2023 22:42 |
|
they're certainly fun to make fun of now but i really think if you can't think of any use cases besides hilarity for these llms you're suffering from a lack of imagination
|
# ? May 30, 2023 22:46 |
|
mediaphage posted: "regardless of however bing has implemented openai's model - they say it uses gpt4, but it answers stuff much more like 3/3.5 - gpt4 does way better with a lot of stuff including this question. you can make it check its own answer to some extent as part of the original question, and it fares much better than the other responses in this thread"

not going to contradict the basic thrust of this, but the one addition i want to make is that the model is certainly fine-tuned to output things like "follow these steps:" and similar. one should not interpret this as the model "explaining" things; that fine-tuning is added to make the model hopefully break reasoning apart in much the way we poked bing above. i.e. get the model to output t-u-r-t-l-e to get the tokenization that helps, then in another step get the characters out of positions, etc.

there's obvious limits to this, as the reasoning has to take a general shape existing in the training data, and it has to be deterministic: if the first step is of an "either try x or y" kind of nature it goes off the rails, as the beam search producing the statistically likely string will lock in on one with no reasoning whatsoever. there's some things to do to overcome this, but nothing that doesn't start to look like doing ai in the 80s as you go on. |
|
# ? May 30, 2023 22:52 |
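the "spell it out first, then index" decomposition described above can be mimicked outside the model. a hypothetical sketch of what that step-by-step structure buys you, with the character-level form made explicit before any positional question is asked:

```python
# hypothetical sketch of the decomposition: step 1 makes the character-level
# form explicit, step 2 answers positional questions from that explicit form
def spell_out(word):
    return "-".join(word.upper())  # step 1: "turtle" -> "T-U-R-T-L-E"

def positions_of(word, letter):
    letters = spell_out(word).split("-")  # step 2: work from the spelled form
    return [i + 1 for i, ch in enumerate(letters) if ch == letter.upper()]

print(spell_out("turtle"))          # T-U-R-T-L-E
print(positions_of("turtle", "t"))  # [1, 4]
```

the model has no such deterministic subroutine, of course; the prompting just nudges it to emit the intermediate "T-U-R-T-L-E" string so the later steps can condition on it.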
|
mediaphage posted: "regardless of however bing has implemented openai's model - they say it uses gpt4, but it answers stuff much more like 3/3.5 - gpt4 does way better with a lot of stuff including this question. you can make it check its own answer to some extent as part of the original question, and it fares much better than the other responses in this thread"

if you ask it to explain its answer as a second question it will just keep correcting itself with a different error

quote: In the word "turtle," the letter "t" appears in the following positions: |
|
# ? May 30, 2023 22:53 |
|
i do certainly lack the capacity to imagine the value of generating vast quantities of grammatically accurate text with little to no relationship to any kind of reality, outside of seo and advertising.

if what it outputs needs to be convincing but inaccurate, it's fine. if you need anything else, an llm is not going to do the job. assuming that this will somehow be "solved" is a lot like assuming "autonomous driving" will be solved in any way that doesn't involve redefining the term to match the actual capabilities of the tool.

it's an interesting technology in a very academic sense, because the practical applications of grammatically correct nonsense are fairly limited and you cannot ever guarantee that it will output anything else.

infernal machines fucked around with this message at 23:01 on May 30, 2023 |
# ? May 30, 2023 22:57 |
|
fwiw:

quote: No, when counting the number of letters in a word, I use the original form of the word and not its tokenized form using Byte Pair Encoding (BPE). The tokenization process is used internally to help me understand and generate text, but it does not affect the way I count the number of letters in a word. Is there anything else you would like to know? 😊 |
|
# ? May 30, 2023 23:13 |
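for what it's worth, the quoted claim is at least arithmetically safe: tokenization only regroups characters, it never adds or drops any, so summing lengths over any split of a word matches counting the original string. a toy check (the split shown is invented; real BPE merges may differ):

```python
# tokenization regroups characters without adding or dropping any, so the
# letter count is the same whether you count the word or sum over tokens
word = "turtle"
tokens = ["tur", "tle"]  # invented split for illustration
assert "".join(tokens) == word
print(len(word), sum(len(t) for t in tokens))  # 6 6
```

whether the model can reliably *report* that count from its internal representation is, per the discussion above, a separate question.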