goblin week
Jan 26, 2019

Absolute clown.
i think it's super cool we can use tech to do pointless things no one wanted


Cybernetic Vermin
Apr 18, 2005

doing pointless things does at least improve on what we usually use tech for

Pollyanna
Mar 5, 2005

Milk's on them.


NoneMoreNegative posted:

Thread:

https://twitter.com/vashikoo/status/1661939940719198212

The Photoshop AI stuff is all built around licensed & royalty-free images, no using artists work in the learning corpus without asking. (of course the PS tools wouldn't be here without the previous success of the less ethical forerunner tools, standing on the shoulders of pirates etc)

Also lol at the Reply Posters absolutely shrieking about this being the MURDER of CINEMA

that’s cool and all, but I want to describe to it in vague terms what is beyond the borders of the original image and have it fill in the blanks with that in mind

fart simpson
Jul 2, 2005

DEATH TO AMERICA
:xickos:

i skipped 5 pages. did i miss anything important

Pollyanna
Mar 5, 2005

Milk's on them.


no

fart simpson
Jul 2, 2005

DEATH TO AMERICA
:xickos:

thanks

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.

fart simpson posted:

i skipped 5 pages. did i miss anything important

ask chatgpt for a summary

Paladin
Nov 26, 2004
You lost today, kid. But that doesn't mean you have to like it.


infernal machines posted:

ask chatgpt for a summary

I had to use chatGPT to build a post retrieval system first to make copy pasting posts easier, then complain to it after it didn't understand how quotes work.

Paladin posted:

Now, please summarize the entire thread, including highlights of humorous moments and viewpoints of each user, but in a more conversational style, ideally mimicking the posting style of YOSPOS (but don't overdo it).

ChatGPT4 posted:

First off, we got Juul-Whip tossing around some high praise for a cloud-based application repository. Pretty standard tech hype. There's a joke or two hidden in there, but it's mostly just them pointing out that this stuff can be used in a ton of different ways.

Rotor takes the mic next, showing some classic YOSPOS cynicism about AI. Ain't fooled by fancy tech, thinks that we're already at a point where AI can write debug code. Good stuff.

Silver Alicorn chimes in, but the post is cut off, leaving us hanging. Who knows what gems could have been in the rest of that message.

Next up is Fart Simpson, trying out some classic AI tricks - getting it to write and then modify a poem about Steve Ballmer. Beefwetter actually wrote the poem, my bad on that mix-up earlier.

Modulo16 is coming in hot with some insider Microsoft tea. Claims Microsoft bought some shiny AI tech just to keep it away from Google, and now they're struggling to implement it.

SpaceAceJase is miffed about the message limit on chatGPT4 and is still on the fence about its quality. Also raises the interesting question of whether an AI bot account would be against the rules.

Then we got ProfessorMarvel playing with ChatGPT through the website interface. Apparently, the South Park episode has him thinking about copy/pasting text messages for ChatGPT responses.

Pmchem pops in to say we're still a good while away from bots taking over the world. Kinda comforting, ain't it?

Polyester Concept drops some deep thoughts about GPT models and their architectural limitations. Tries to play tic tac toe and hangman with chat gpt and... yeah, not so successful.

Cheque_some is on the edge of his seat waiting for peak AI-induced laziness, while ~Coxy is fantasizing about a future where all business communication is a verbose AI word salad. Good times.

Sweevo comes in with the mysterious one-word post. "Ban." drat, who pissed him off?

MrMoo gives us a few tidbits - something about an AI bus trial in Edinburgh, and another thing about Windows 95 chatgpt keys.

NoneMoreNegative talks about a tool that uses AI to create images for Photoshop and feels for commercial artists who might get hit by this. Also shares a bunch of links about AI, including some impressive AI images from the Demon Haunted World project.

Infernal Machines delivers a bit of a bleak outlook for artists in the era of AI, predicting a dip in quality for commercial work. Also shares some interesting insights about AI glasses and the future of AI in music.

And that's the thread! A bunch of tech-savvy folks talking shop about AI, tossing around jokes and thoughts. Some are hyped, some are skeptical, but they're all just trying to make sense of the AI future we're all hurtling towards.

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.

Paladin posted:

Also shares some interesting insights about AI glasses and the future of AI in music.

pretty good, it wrote two lines about my posts and one of them is objectively wrong

mediaphage
Mar 22, 2007

Excuse me, pardon me, sheer perfection coming through
https://twitter.com/Liv_Lanes/status/1663335430702481409?s=20

(also my kingdom for an E0)

mediaphage
Mar 22, 2007

Excuse me, pardon me, sheer perfection coming through

infernal machines posted:

objectively wrong

idk sounds like your posts all right

Beeftweeter
Jun 28, 2005

OFFICIAL #1 GNOME FAN

Paladin posted:

I had to use chatGPT to build a post retrieval system first to make copy pasting posts easier, then complain to it after it didn't understand how quotes work.

lol. it got my username wrong somehow and also the content of the post

mediaphage
Mar 22, 2007

Excuse me, pardon me, sheer perfection coming through

Beeftweeter posted:

lol. it got my username wrong somehow and also the content of the post

i think we all know beefwetter is the superior username and actually you're wrong if you dont change it

Paladin
Nov 26, 2004
You lost today, kid. But that doesn't mean you have to like it.


Beeftweeter posted:

lol. it got my username wrong somehow and also the content of the post

I think I made a typo and it just went along with it.

And yes I have read your username as Beefwetter for years because

mediaphage posted:

i think we all know beefwetter is the superior username and actually you're wrong if you dont change it

mediaphage
Mar 22, 2007

Excuse me, pardon me, sheer perfection coming through
thinkin bout that beef wet

Beeftweeter
Jun 28, 2005

OFFICIAL #1 GNOME FAN

mediaphage posted:

i think we all know beefwetter is the superior username and actually you're wrong if you dont change it

hmm. we could do with a new namechange thread

fart simpson
Jul 2, 2005

DEATH TO AMERICA
:xickos:

i don’t remember having chatgpt write a poem about steve ballmer


post hole digger
Mar 21, 2011

i don’t remember having chatgpt write a poem about steve ballmer

fart simpson
Jul 2, 2005

DEATH TO AMERICA
:xickos:

post hole digger posted:

i don’t remember having chatgpt write a poem about steve ballmer

neither do i

echinopsis
Apr 13, 2004

by Fluffdaddy
I read of someone using gpt to make meal plans and you can even ask it to only include seasonal foods etc.

interesting and maybe good use of ai

echinopsis
Apr 13, 2004

by Fluffdaddy
just asked it for a recipe for egg free pancakes and guess what it did

Agile Vector
May 21, 2007

scrum bored



mediaphage posted:

thinkin bout that beef wet

https://www.youtube.com/watch?v=uMcAagFNrPY

Beeftweeter
Jun 28, 2005

OFFICIAL #1 GNOME FAN

echinopsis posted:

just asked it for a recipe for egg free pancakes and guess what it did

hmm. gonna go with it suggested you buy an egg at a retailer that is also offering free pancakes with egg purchase

Cybernetic Vermin
Apr 18, 2005

echinopsis posted:

just asked it for a recipe for egg free pancakes and guess what it did

told you that there are no eggs in pancakes, and then if there are they are in the first and seventh position.

Paladin
Nov 26, 2004
You lost today, kid. But that doesn't mean you have to like it.


Oh yeah side note, the best use of chat GPT is to write glowing employee feedback anytime you interact with someone on help desk, retail, etc. and get a survey asking how they did and there's a space to "write more comments".

Good feedback often means cash bonuses or at the least a favorable performance review, maybe raise, so help out the poor T1s while the job still exists.

echinopsis
Apr 13, 2004

by Fluffdaddy

Beeftweeter posted:

hmm. gonna go with it suggested you buy an egg at a retailer that is also offering free pancakes with egg purchase

oh lol no it just gave me an egg free recipe

Agile Vector
May 21, 2007

scrum bored



Cybernetic Vermin posted:

told you that there are no eggs in pancakes, and then if there are they are in the first and seventh position.

i tried the ol' paradox with 'the number of words in this sentence' where i tried to feed it an answer that didn't match the sample sentence and, after the model identified it was contradictory, i asked how many words were in its explanation

11, it said

there were 16

i'm pretty sure the model implicitly followed the broken reasoning i'd provided (incidentally, the true word count and the claimed word count were one more and one less than the paradox i used)
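
the literal check the model flubbed here is trivial to do deterministically. a minimal sketch — the sentence below is a made-up stand-in, since the model's actual 16-word reply isn't quoted in the thread:

```python
def word_count(sentence: str) -> int:
    # whitespace-delimited count: the literal notion of "how many words"
    return len(sentence.split())

# hypothetical 16-word reply standing in for the model's explanation
reply = " ".join(f"w{i}" for i in range(16))
print(word_count(reply))  # 16, not the 11 the model claimed
```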

Cybernetic Vermin
Apr 18, 2005

i suspect that figuring it followed the incorrect reasoning provided is already overestimating the model's workings.

part of why the models struggle with this sort of thing is that the input is provided tokenized with common subwords (turning "tokenized with common subwords" into something like " tok|en|ized| with| comm|on| sub|word|s|"), but if one believes the models exhibit emergent reasoning then obviously the information about what characters are in those tokens *is* available in the training data. e.g. you can absolutely guide the reasoning based on the information encoded:



and it'll do any word i could think of that way, but ultimately that's pretty much just supplying additional reasoning by prompt (i.e. the tokenization of "T-U-R-T-L-E" is " T|-|U|R|-|T|-|L|-|E", making the model state it and guiding it to look at it does the real work). you'd be able to train a model to do this specific task, but that's a matter of chasing small improvements; the mechanisms involved are not sufficient to allow arbitrary reasoning steps "internally" without training the model to spell them out.
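
a quick sketch of the gap between the token view and the character view — the subword split here is invented for illustration, not the real GPT vocabulary:

```python
# hypothetical subword split, in the spirit of " tok|en|ized| with| comm|on| sub|word|s"
tokens = ["turt", "le"]        # roughly what the model "sees" for "turtle"
word = "".join(tokens)         # the character-level string it never sees directly

# counting letters needs the character view, which tokenization obscures
print(word.count("t"))         # 2

# spelling the word out ("T-U-R-T-L-E") forces near-single-character tokens,
# which is why guiding the model to spell first does the real work
print("-".join(word.upper()))  # T-U-R-T-L-E
```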

Agile Vector
May 21, 2007

scrum bored



hmm, looking at it that way, meaningful words in the reply i got would get closer to the count provided. iiuc, depending on how the statement was tokenized it could count short in a literal sense but be consistent internally?

Beeftweeter
Jun 28, 2005

OFFICIAL #1 GNOME FAN
lol, i was just playing around with essentially the same thing. you're right, you can guide how it responds but bing in particular doesn't like that very much! i was able to confuse it to the point that it thought it had ended the conversation, but it didn't, so all i got were blank responses past that

Cybernetic Vermin
Apr 18, 2005

Agile Vector posted:

hmm, looking at it that way, meaningful words in the reply i got would get closer to the count provided. iiuc, depending on how the statement was tokenized it could count short in a literal sense but be consistent internally?

possibly, but the fact that it is also trained with tokenizations that have a messy relationship with any such statement in the training data (i.e. sentences that talk about word counts) means that all such lexical counts might just be generally poorly represented internally.

tbh we're immediately in interesting research asking these questions.

mediaphage
Mar 22, 2007

Excuse me, pardon me, sheer perfection coming through
regardless of how bing has implemented openai's model - they say it uses gpt4, but it answers stuff much more like 3/3.5 - gpt4 does way better with a lot of stuff including this question. you can make it check its own answer to some extent as part of the original question, and it fares much better than the other responses in this thread

chatGPT v3.5 posted:

The word "turtle" is spelled as follows: T-U-R-T-L-E.

To determine the number of "T's" in the word "turtle," we can follow these steps:

Analyze each letter in the word "turtle" one by one.
Count the occurrences of the letter "T" as we go through the word.
Going through the word "turtle," we find that there is one occurrence of the letter "T." Therefore, the number of "T's" in the word "turtle" is 1.

Now, let's review our conversation to check for correctness:

In our conversation, you asked me to spell the word "turtle" and then determine the number of "T's" in that word. I spelled the word as "T-U-R-T-L-E" and concluded that there is one "T" in the word "turtle." Upon reviewing our conversation, the information provided is correct.

Therefore, the final answer is that there is one "T" in the word "turtle."

chatGPT v4 posted:

The word "turtle" is spelled: T-U-R-T-L-E.

Now, let's count how many T's there are in this word.

The first step is to identify each individual letter. So, for "turtle" we have:

  1. T
  2. U
  3. R
  4. T
  5. L
  6. E

Next, we look at each letter in turn and check whether it is a 'T'.

Looking at the first letter, we see it is a 'T', so we count 1.

The second letter is 'U', which is not a 'T', so our count remains 1.

The third letter is 'R', which is also not a 'T', so our count is still 1.

The fourth letter is another 'T', so we add that to our count, making it 2.

The fifth and sixth letters, 'L' and 'E', are not 'T's, so our count remains at 2.

Therefore, there are 2 'T's in the word "turtle".

Let's now check our conversation to ensure correctness. Looking at our discussion:

The word was spelled correctly as T-U-R-T-L-E.
The process for determining the number of 'T's was explained thoroughly and accurately.
The count was performed correctly, finding 2 'T's in the word "turtle".
Given that the steps followed were correct and the count is accurate, the final answer is:

There are 2 T's in the word "turtle".

yeah a lot of this stuff is going to be bad and problematic but it's clear that they're also probably going to get better over time - which may be worse in some ways because they'll be much more believed when they spit out some hallucination or another
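
for reference, the deterministic check both transcripts are approximating is a one-liner (a sketch, nothing model-specific, case-insensitive):

```python
def count_letter(word: str, letter: str) -> int:
    # case-insensitive occurrence count
    return word.lower().count(letter.lower())

print(count_letter("turtle", "T"))  # 2, matching the gpt4 transcript rather than 3.5's
```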

Pollyanna
Mar 5, 2005

Milk's on them.


gpt and its ilk are just souped-up ouija boards

Pollyanna
Mar 5, 2005

Milk's on them.


good for entertainment but otherwise very limited tools

mediaphage
Mar 22, 2007

Excuse me, pardon me, sheer perfection coming through
they're certainly fun to make fun of now but i really think if you can't think of any use cases besides hilarity for these llms you're suffering from a lack of imagination

Cybernetic Vermin
Apr 18, 2005

mediaphage posted:

regardless of however bing has implemented openai's model - they say it uses gpt4, but it answers stuff much more like 3/3.5 - gpt4 does way better with a lot of stuff including this question. you can make it check its own answer to some extent as part of the original question, and it fares much better than the other responses in this thread

yeah a lot of this stuff is going to be bad and problematic but it's clear that they're also probably going to get better over time - which may be worse in some ways because they'll be much more believed when they spit out some hallucination or another

not going to contradict the basic thrust of this, but the one addition i want to make is that the model is certainly fine-tuned to output things like "follow these steps:" and similar. one should not interpret this as the model "explaining" things, that fine-tuning is added to make the model hopefully break reasoning apart in much the way we poked bing above. i.e. get the model to output t-u-r-t-l-e to get the tokenization that helps, then in another step get the characters out of positions, etc.

there's obvious limits to this, as the reasoning has to take a general shape existing in the training data, and it has to be deterministic: if the first step is of an "either try x or y" nature it goes off the rails, as the beam search producing the statistically likely string will lock in on one with no reasoning whatsoever. there's some things to do to overcome this, but nothing that doesn't start to look like doing ai in the 80s as you go on.
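
the "locks in" failure mode can be sketched with a toy next-token table (probabilities invented for illustration): greedy decoding commits to the locally likely branch even when the other branch wins on joint probability:

```python
# toy step-wise next-token probabilities (made up for illustration)
probs = {
    (): {"x": 0.6, "y": 0.4},
    ("x",): {"a": 0.3},  # joint p(x, a) = 0.18
    ("y",): {"b": 0.9},  # joint p(y, b) = 0.36
}

def greedy(prefix=()):
    # pick the single most likely next token at each step, never revisiting
    while prefix in probs:
        step = probs[prefix]
        prefix = prefix + (max(step, key=step.get),)
    return prefix

print(greedy())  # ('x', 'a'), even though ('y', 'b') has the higher joint probability
```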

Chalks
Sep 30, 2009

mediaphage posted:

regardless of however bing has implemented openai's model - they say it uses gpt4, but it answers stuff much more like 3/3.5 - gpt4 does way better with a lot of stuff including this question. you can make it check its own answer to some extent as part of the original question, and it fares much better than the other responses in this thread

yeah a lot of this stuff is going to be bad and problematic but it's clear that they're also probably going to get better over time - which may be worse in some ways because they'll be much more believed when they spit out some hallucination or another

if you ask it to explain its answer as a second question it will just keep correcting itself with a different error

quote:

In the word "turtle," the letter "t" appears in the following positions:

At the second position (index 1).
At the sixth position (index 5).

> explain how you came to this answer

Apologies for the confusion in my previous response. Upon reviewing it, I realized that there was an error. Let me correct it.

In the word "turtle," the letter "t" appears at the following positions:

At the second position (index 1).
At the fifth position (index 4).
At the sixth position (index 5).

To determine the positions, I analyzed the word "turtle" and counted each occurrence of the letter "t" from left to right.

> explain how you came to this answer

I apologize for the incorrect response in my previous clarification. Upon reviewing it once again, I realized that there was still an error. I apologize for the confusion caused.

To accurately determine the positions where the letter "t" appears in the word "turtle," let's go through the word step by step:

"T" - The first letter in the word is not a "t."

"u" - The second letter in the word is not a "t."

"r" - The third letter in the word is not a "t."

"t" - The fourth letter in the word is indeed a "t." This is the first occurrence of the letter "t" in the word "turtle." It appears at position 4 (index 3).

"l" - The fifth letter in the word is not a "t."

"e" - The sixth letter in the word is not a "t."

Therefore, the letter "t" appears only once in the word "turtle," specifically at the fourth position (index 3) in the word.
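
each of the model's three answers above is wrong in a different way; the actual positions are easy to pin down mechanically (zero-based indexing here, a quick sketch):

```python
word = "turtle"
positions = [i for i, ch in enumerate(word) if ch.lower() == "t"]
print(positions)  # [0, 3]: the first and fourth letters, not any of the combinations it offered
```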

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.
i do certainly lack the capacity to imagine the value of generating vast quantities of grammatically accurate text with little to no relationship to any kind of reality, outside of seo and advertising

if what it outputs needs to be convincing but inaccurate, it's fine. if you need anything else, an llm is not going to do the job

assuming that this will somehow be "solved" is a lot like assuming "autonomous driving" will be solved in any way that doesn't involve redefining the term to match the actual capabilities of the tool

it's an interesting technology in a very academic sense, because the practical applications of grammatically correct nonsense are fairly limited and you cannot ever guarantee that it will output anything else



Beeftweeter
Jun 28, 2005

OFFICIAL #1 GNOME FAN
fwiw

quote:

No, when counting the number of letters in a word, I use the original form of the word and not its tokenized form using Byte Pair Encoding (BPE). The tokenization process is used internally to help me understand and generate text, but it does not affect the way I count the number of letters in a word. Is there anything else you would like to know? 😊
