XYZAB
Jun 29, 2003

HNNNNNGG!!
Every time I type out "AI" and the I doesn't have serifs:


Seph
Jul 12, 2004

Please look at this photo every time you support or defend war crimes. Thank you.
I've been thinking about the rapid advancement of genAI over the past couple of years, and self-driving vehicles popped into my head as an interesting comparison point.

I remember back around 2015 when the self driving hype was getting started - it seemed like every couple of months there was a new update with new features getting announced. Within a year or so Teslas were able to drive on the freeway mostly by themselves. It seemed like a bunch of people - not just Musk - were expecting fully autonomous level 5 vehicles within the next few years. Then around 2019 progress hit a wall and it's been mostly minor incremental improvements since then. We're barely at level 3 self driving and level 5 seems decades away.

I'm not saying genAI will necessarily follow the same trajectory, but it made me wonder whether it's possible to know where in the development curve we are with it. Are all the recent advancements just the tip of the iceberg, or is it possible we're getting to the point of diminishing returns (whether that's on compute time, development time, or training data)?

Lucid Dream
Feb 4, 2003

That boy ain't right.

Seph posted:

I'm not saying genAI will necessarily follow the same trajectory, but it made me wonder whether it's possible to know where in the development curve we are with it. Are all the recent advancements just the tip of the iceberg, or is it possible we're getting to the point of diminishing returns (whether that's on compute time, development time, or training data)?

Can't say yet, still too early. GPT5 will be a bellwether. Over the last few weeks I've seen several significant "hard" problems get solved in the AI space, and the improvements are still happening too rapidly to say where things might land.

Thoom
Jan 12, 2004

LUIGI SMASH!
Waymo is level 4, no? And from what I've heard through the grapevine, the main barrier to their expansion at this point isn't the self-driving tech itself, but the economics of the cars, which require expensive nightly maintenance.

BrainDance
May 8, 2007

Disco all night long!

Seph posted:

I'm not saying genAI will necessarily follow the same trajectory, but it made me wonder whether it's possible to know where in the development curve we are with it. Are all the recent advancements just the tip of the iceberg, or is it possible we're getting to the point of diminishing returns (whether that's on compute time, development time, or training data)?

Yeah, it's really hard to say. We know there's at least a decent amount of improvement likely left to make with our current technology; it would be very surprising to me if we found out "oh, wow, we accidentally hit the optimal way to train transformers, this is about as good as it gets." On top of that, there have been a lot of things popping up all the time that look like they might be big improvements and absolutely haven't been fully explored yet. There are some approaches that aren't feasible now but will become feasible with even more powerful hardware; what we're doing at the peak of it all right now is basically just pushing the limits of the hardware we have.

I always think about what this would all be like if we'd had the idea and understanding of current AIs 20+ years ago, when the hardware just wasn't there. Like, if Attention Is All You Need had been published in '98, would we have sat around waiting, or done something with super small models? Are we kinda waiting right now? I think so. Like, I think we can take the whole mixture-of-experts thing a lot further than we have. I don't think it's a coincidence that the models we're using right now basically require the limits of the hardware we have.

The big optimizations we've found that allow bigger models to run on weaker hardware don't just benefit people at home running models on their 3090; they also mean we can potentially run even more powerful, hardware-intensive models on the huge 8-GPU datacenter rigs, too. This is why I wish we knew more about GPT-4: is it what it is right now because even they don't have the hardware for what it could be?

Though at the same time, I think most people do believe that with transformers there is a limit. There's probably a size out there where just making the model bigger stops mattering all that much, or where you can't make it better at something with better training data on that thing. But there was research on alternative methods and improvements 5+ years ago too, and now there's a whole lot more of it: tons of people working on finding a better way, including people who weren't doing that before, since AI has everyone's attention now. We aren't living in the environment that led to the discovery of transformers; we're in something much bigger. So I actually think it's very likely we'll find huge advancements and completely new approaches much faster than we were finding them before. There's so much money being thrown at it now.

If it all has some really hard line to cross, where we just can't figure out some big problems even throwing tons of research money at them, I don't see any reason to think we're actually that close to it. Like, so far from it that what's possible with just what we have now won't look all that similar to what we have now.

I think diffusion models are even further from what's possible with them. That's just my feeling looking at how we're using them now and the progress we've made.

So I think there is a limit, I don't think we're close to it.

Where I think we're still really far behind, though, even with just the models and technology we have, is implementation. There's a ton of stuff that could be done with what we have, but no one's gone and implemented it yet or even thought of it; we haven't had as much time with the current technology, since this all really only blew up a short time ago. We're still mostly treating it like a toy, with some implementations just using it in the most obvious way. Even with transformers, I suspect they're useful for more than just chatbots, natural-language search engines, and lovely automated customer support. If hardware gets to the point where it's easy to finetune very large models for whatever specific purpose you have in mind, and then run them, we'll find a lot more uses for them, and I don't see hardware reaching its limit any time soon.

I would like to see an extremely large single-task model get trained, to see how good it can get at that one thing. If that were cheap, I think it'd be tried more, instead of these general models that get finetuned on top.
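For anyone who hasn't looked at what the mixture-of-experts thing actually means mechanically, here's a minimal toy sketch. Everything here (the `ToyMoE` name, the sizes, the random weights) is made up purely for illustration; the point is just the routing: a learned router scores every expert per input, but only the top-k experts actually run, which is why the approach stretches a fixed hardware budget.

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

using Vec = std::vector<float>;

// Toy mixture-of-experts layer. Each "expert" is a tiny d x d linear map;
// the router is one score vector per expert. Not a real training setup.
struct ToyMoE {
    int d, n_experts, top_k;
    std::vector<Vec> router;                // n_experts rows of length d
    std::vector<std::vector<Vec>> experts;  // per expert: d rows of length d

    ToyMoE(int d_, int n_, int k_) : d(d_), n_experts(n_), top_k(k_) {
        std::mt19937 rng(42);
        std::normal_distribution<float> g(0.f, 0.1f);
        router.assign(n_experts, Vec(d));
        experts.assign(n_experts, std::vector<Vec>(d, Vec(d)));
        for (auto& r : router) for (auto& w : r) w = g(rng);
        for (auto& e : experts) for (auto& row : e) for (auto& w : row) w = g(rng);
    }

    static float dot(const Vec& a, const Vec& b) {
        float s = 0.f;
        for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
        return s;
    }

    Vec forward(const Vec& x) const {
        // Score every expert, but only the best top_k actually run.
        std::vector<float> score(n_experts);
        std::vector<int> idx(n_experts);
        for (int i = 0; i < n_experts; ++i) { score[i] = dot(router[i], x); idx[i] = i; }
        std::partial_sort(idx.begin(), idx.begin() + top_k, idx.end(),
                          [&](int a, int b) { return score[a] > score[b]; });

        // Softmax over just the chosen experts' scores.
        float z = 0.f;
        std::vector<float> w(top_k);
        for (int k = 0; k < top_k; ++k) z += w[k] = std::exp(score[idx[k]]);

        // Weighted sum of the chosen experts' outputs.
        Vec out(d, 0.f);
        for (int k = 0; k < top_k; ++k)
            for (int r = 0; r < d; ++r)
                out[r] += (w[k] / z) * dot(experts[idx[k]][r], x);
        return out;
    }
};
```

A real MoE layer learns the router and experts jointly and adds load-balancing tricks so experts don't collapse; this only shows why compute per token stays roughly constant while total parameters grow with the expert count.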

Lucid Dream
Feb 4, 2003

That boy ain't right.

BrainDance posted:

Like, I think we can take the whole mixture of experts thing a lot further than we have.

My baseless speculation is that the rumored Q* breakthrough at OpenAI is a way to navigate a higher dimensional mixture of experts or something.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
It's a good point, though. People constantly assume "it's only going to get better" based on basically nothing; it's just something we've gotten used to with technology.
Lots of technologies hit a plateau far faster than people thought they would, or the advances get priced out of usefulness.

GenAI has advanced far quicker than I think most experts predicted, but it could also just as quickly hit a dead end and stall out for years; it did before, until neural networks came along and kickstarted it back into high gear.

With all the various directions research is going in and the vast amounts of money being poured in, this cycle probably has legs still but who the hell knows.

Mega Comrade fucked around with this message at 10:12 on Feb 24, 2024

SaTaMaS
Apr 18, 2003

Thoom posted:

Waymo is level 4, no? And from what I've heard through the grapevine, the main barrier to their expansion at this point isn't the self-driving tech itself, but the economics of the cars, which require expensive nightly maintenance.

Waymo requires pre-mapping with lidar, which limits where the cars can go.

Cicero
Dec 17, 2003

Jumpjet, melta, jumpjet. Repeat for ten minutes or until victory is assured.
Waymo has to very extensively test in each new area to handle whatever edge cases may be specific to a new metro (though at least in theory, they should have to do less of this for each new area). They're clearly fairly conservative if you compare them to the rivals they've had; when both Cruise and Waymo were rolling out in SF, the self driving cars subreddit got way more posts about randomly stuck Cruise cars than Waymos, but that didn't seem to stop Cruise from rapidly announcing new expansion plans. Waymo still has some things they haven't fully tackled yet too, like freeways and (most?) bad weather.

022424
Feb 25, 2024

SaTaMaS posted:

which requires pre-mapping with lidar

our bourgi-controlled society is sleeping at the wheel on stuff like this

Staluigi
Jun 22, 2021

thermodynamics cheated
from now on i will refer to all AI hype and marketing practices as "that willy wonka poo poo"

at least this event got people fuckin talkin about AI driven advertising in the actually correct terms: "how this gonna get used to hose us over"

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20
From https://willyschocolateexperience.com/index.html

quote:

Any resemblance to any character, fictitious or living, is purely coincidental.
This experience is in no way related to the Wonka franchise, which is owned by the Warner Bros. company.

At least this part was true :v:

BoldFace
Feb 28, 2011
https://twitter.com/bene25_/status/1762631362597519859

Main Paineframe
Oct 27, 2010

The press managed to find and interview this poor oompa loompa too.

https://twitter.com/davidmackau/status/1762981623115465156

highlights:
  • the actors only got the script the night before the event, and they suspected that it was AI-written, but they'd all already signed contracts by that point

  • the photo was taken after the organizers told her to drop the script and just wing it. which was also about the same time that the "Jellybean Room" ran out of jelly beans, leaving the actors with no candy to give to the kids

  • many of the other actors had already walked off the set by that point. she left shortly afterwards as well

  • the actors were first attracted to the job because it promised pretty decent pay for a couple days' work. but so far none of the actors have been paid

  • she understands why it went viral but she's not super happy about it, and she hopes that being famous as the "meth head oompa loompa" isn't going to hurt her children's entertainment career

https://www.vulture.com/article/glasgow-sad-oompa-loompa-interview.html

quote:

The internet loves a fiasco. Whether it be 2017’s infamous Fyre Festival, 2014’s sad ball pit at Dashcon, or last year’s muddy hike for freedom at Burning Man, we love to marvel at events that make big promises but flop spectacularly. It’s the online equivalent of slowing down in your car to look at a giant wreck.

Enter February 24’s disastrous Willy Wonka chocolate factory experience in Glasgow, Scotland, which instantly became a viral sensation when pictures emerged of the sad spectacle. Organized by House of Illuminati, whose AI-heavy website promotes the company as a place where “fantasy and reality converge,” the event was advertised as an immersive experience for families themed around the classic Roald Dahl novel and movie franchise (and with tickets that cost up to $44). But the end result was anything but magical.

The event took place in a largely empty warehouse/venue that had been decorated, so to speak, with a few scattered candy-themed props, hanging plastic backgrounds, and a small jumping castle. As bewildered families wandered the space, they were greeted by actors dressed in shoddy costumes who read from scripts they later said they suspected were “AI-generated gibberish.” Rather than feasting on an array of sweets, children were given a few jellybeans and a quarter cup of lemonade. Soon, angry parents called the police as they demanded refunds from organizers, who canceled the event before the day was out.

While many photos have emerged of the Wonka fiasco, one picture in particular has grabbed the internet’s attention. It shows an actor wearing the Shein equivalent of an Oompa Loompa costume and looking slightly dead in the eyes as she stands in a smoky room behind a table covered in so much scientific equipment that countless people online compared it to a “meth lab.”

So who was she? Meet Kirsty Paterson.

While a subsequent photo shows there was at least one other Oompa Loompa at the table that day, it’s Paterson who has emerged as the viral star and defining image of the Wonka fiasco. X posts have proclaimed her as one of the standout memes of 2024 so far. People are comparing the image to works of art or making art of their own showing Paterson as the Mona Lisa. Others are wondering if she is really the missing Kate Middleton in disguise. Many are predicting, with good reason, that she’ll be among the Halloween costumes you see this year.

Paterson, who first spoke with the Daily Mail about her experience with the event, opened up in an interview on Wednesday about what it's been like to go viral for one of the most mortifying days of her life, and for a picture that, she stresses, does not actually resemble her in real life.

Tell me about yourself. What do you do when you’re not dressed as an Oompa Loompa?
I’m 29. I want to act more in children’s entertainment because I enjoy it. I’ve got a lot of energy, and I like being around kids. But it’s more of a side hustle. I’m also a fire dancer, and I’m just trying to build up my yoga teaching at the moment.

How did you get roped into this event? Was it listed somewhere?
It was listed on Indeed. I don’t normally get my acting jobs through Indeed, but I just thought, All right. To be honest, I was a wee bit skeptical, because it was not through an agency. They were offering £500 for two days of work, so I decided to go.

You went to the warehouse on Friday for a dress rehearsal. What happened at that meeting?
I was shocked, to be honest with you. I wasn’t expecting it to be like that. It did seem like there was a production going, but to me it wasn’t a finished production — just the start. It was the first time me and the other actors met. I’ve never had a script the night before ever in my life, so when I got the script the night before I was like, Oh, this is not …

They kept going on about how you could just improvise. I was skeptical of them saying that, too, because if you’d written a script, then you’d probably pride yourself on what you’d written, right? You wouldn’t want people to improvise! I kind of thought it was AI-generated, but by this point I’d signed the contract.

Were you shown the costumes that day? The set?
This is the mental thing about the costumes! Given the amount people were paying to go and the amount they were paying us, I thought they’d have sufficient costumes. So at this point, I was like, “What is it you want us to wear? What’s the makeup like?” Because it’s a Willy Wonka experience, you need to have good costumes, because it’s all meant to be a bit imaginative. But we didn’t know what we were wearing at that point.

They said they were going to be working through the night or whatever, so I just assumed it was going to be a lot better.

What happened when you turned up on Saturday?
It was the exact same! I was like, I don’t know if I actually want to do this. But I’d signed the contract, and part of me didn’t want to disappoint the kids going to this. Honestly, it was bad enough. I knew it was shocking, but I know I’m good at what I do, so I was like, If I can bring a wee bit of something good to this …

At this point, they gave us the costumes. They were so inappropriate for what it was. It was just strange, and they were really cheaply made. It was almost like they were secondhand.

Were you wearing orange body paint in that photo? I couldn’t tell.
No! They didn’t have any makeup or anything. It’s something I’ve never experienced ever in my life.

By this point, I’m judging myself for letting this go on as long as it did. But we got the costumes, and I started seeing the kids coming in, and they were all dressed up, and I was just feeling so guilty.

Tell me about that moment as the kids started coming in. Did you see the innocence leave their eyes?
They were quite upset. I think they were confused. But it was more the parents. I just looked at them, and I think they must have known when they looked at me. An older couple said to me, “I really, really hope you get paid well for this.” And I was honest and I said, “I’m this close to walking out. This is not what I signed up for.” But I didn’t want to let the people around me down. The actors I was working with are amazing people, and this has got nothing to do with them. So I just thought, I’m going to make the best of this.

When the first round of kids came in … Don’t get me wrong, it’s still an incredibly, shockingly bad set, but we did our lines and everything well. We just had a bit of fun with it. I don’t know how else you can put sprinkles on poo poo, but we were trying to be the sprinkles on poo poo.

I was going around and feeling really embarrassed. After we did it the first time, the organizers were like, “Just abandon the script and let the guests walk through.”

When you say “walk through” — how long are we talking here to walk through this event?
I’m telling you this would have taken about two minutes. I’ve never experienced anything like this in my life.

Let’s talk about this photo that’s gone viral. What are you doing at this moment here?
It doesn’t even look like me! I was thinking when it came out initially, Oh, people aren’t going to know it looks like me. Maybe I can get away with this. But no, it went completely global.

In the exact moment of the photo, they’d told us to abandon the script. They had this “Jellybean Room,” but they eventually ran out of jelly beans. I was already rationing the jelly beans to three per kid, and that was me being generous. I wanted to give the kids all the candy. So we had no jelly beans, and people were coming up to me. It was just humiliating. I was starting to get angry. The other Oompa Loompa came over at this point and I went, “Where is everyone?! Why am I left here on my own?! Where is everyone else?!” You know how they talk about “me contemplating my life”? This is me contemplating my life.

I’m laughing about it now, but I was so angry for the kids and the parents. I know people spent a lot of money coming here. It’s a disservice to what I do. Eventually, I just walked off. I was like, “I’m done.” But someone got a picture of me.

Are you aware just how viral this picture is?
I’m not going to lie or sugarcoat this: This has been quite a lot for me. I find it funny and I can make a humorous joke about it, but the flip side of this is that this is embarrassing for what I do, and I hope this doesn’t tarnish that. It’s as if it’s been edited. Obviously, because it’s such an ugly photo, people were commenting on it and saying I look ugly or like a meth head. I found the negative comments really hard, but I do see the funny side of it. I know I’m all-right looking.

I suppose that’s good it doesn’t resemble you.
Me and all my close friends recognized me! And my friends know I’m the only one who would do something as daft as this.

To be honest, I don’t know how viral this has gone. I turned off my phone for a day because I was like, Right, this is too much. My phone was going off constantly, and I didn’t know how to navigate it.

So how big has this gone? Because this is the first interview I’ve done where I’ve really spoken about it.

It is very viral. I’m not going to lie. But a lot of pictures from the event have gone viral.
I don’t blame it for going viral. I just hope everyone gets their money back, because I’ve not been paid for this. None of the actors have. So it’s like we’re kind of going through all this for nothing.

I just need to see the funny side of it. Anyone who knows me knows I can take a joke. I just didn’t expect it to go this wild.

Have you seen the people saying they’re going to dress up like you for Halloween?
No, but if you want to, you can!

Electric Wrigglies
Feb 6, 2015

Good on her, and thanks for humanizing her. Hopefully she goes on to bigger and better things!

Quixzlizx
Jan 7, 2007
Is it just me, or is this less of an "AI grifting story" and more of a "grifters who happened to use AI" story? Like, maybe it lowered the amount of effort required, but all of the grifting elements could've easily been done before ChatGPT existed. They would've had to do a GIS/Pinterest search for the picture instead of entering an AI prompt.

mobby_6kl
Aug 9, 2009

by Fluffdaddy

Quixzlizx posted:

Is it just me, or is this less of an "AI grifting story" and more of a "grifters who happened to use AI" story? Like, maybe it lowered the amount of effort required, but all of the grifting elements could've easily been done before ChatGPT existed. They would've had to do a GIS/Pinterest search for the picture instead of entering an AI prompt.
No that's right.

That said, the generative AI stuff will make it easier and cheaper. You could GIS, but you'd need to find a bunch of images that show what you want to grift, are consistent, and aren't recognizably an existing thing or reverse-searchable. One could also subtly or not so subtly enhance images of the real location/product, so when people do show up it's vaguely similar to what they expected, just a bit (much) shittier.

Same with the scripts, you could write that stuff yourself or steal it somewhere of course, but you could more easily generate the specific scripts you need by asking ChatGPT.

Reveilled
Apr 19, 2007

Take up your rifles

mobby_6kl posted:

No that's right.

That said, the generative AI stuff will make it easier and cheaper. You could GIS, but you'd need to find a bunch of images that show what you want to grift, are consistent, and aren't recognizably an existing thing or reverse-searchable. One could also subtly or not so subtly enhance images of the real location/product, so when people do show up it's vaguely similar to what they expected, just a bit (much) shittier.

Same with the scripts, you could write that stuff yourself or steal it somewhere of course, but you could more easily generate the specific scripts you need by asking ChatGPT.

To be honest you don’t necessarily need to pass those hurdles, local social media had identified this as a grift in the weeks leading up to it and even had the name of the organiser (who has previous form for it), but it still sold tickets. If they’d used pictures from the film and not even bothered with a script I doubt it would have made much of a difference.

I imagine that for the vast, vast majority of customers, what convinced them the event was legit was that one of our local event ticketing websites was selling tickets. Whats On Glasgow don't really do any validation of the events they list, but the people who bought these tickets probably assumed the opposite. If they ended up on the event's own website at all, it was probably after seeing the listing on WhatsOn, so they were primed to assume it was real. And I imagine a lot of them never even cared to look up the event's website.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Quixzlizx posted:

Is it just me, or is this less of an "AI grifting story" and more of a "grifters who happened to use AI" story? Like, maybe it lowered the amount of effort required, but all of the grifting elements could've easily been done before ChatGPT existed. They would've had to do a GIS/Pinterest search for the picture instead of entering an AI prompt.
Yeah, all the images and everything were obviously not photos to anyone with half a brain. I don't think anyone showed up expecting it to look like that; they just naturally expected it wouldn't be so out-of-this-world grim and half-assed.

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

cat botherer posted:

Yeah, all the images and everything were obviously not photos to anyone with half a brain. I don't think anyone showed up expecting it to look like that; they just naturally expected it wouldn't be so out-of-this-world grim and half-assed.

Except... now they are and want it reopened :confused:
https://www.scotsman.com/news/peopl...outrage-4539528



For a bit more content, here are some pictures of the actress playing the Oompa Loompa and trying to make the best of the situation.
https://imgur.com/gallery/i6FQOaK
It has to be absolutely awful to have the entire internet crash down on top of you all at once like that. Not one bit of this was any of the actors' fault, so I'm glad to see people pushing back.


More content: The actual 15 page script that was supposed to be read by the actors.
https://i.dailymail.co.uk/i/pix/2024/02/27/willys_chocolate_experience.pdf
I've read this and some of this truly is... magical.

I give it even odds that someone reworks this and it ends up in an Off-Broadway production.
https://www.themarysue.com/karen-gillan-scottish-willy-wonka-experience/
Better than even odds that it ends up on Broadway.

Tippecanoe
Jan 26, 2011

I enjoyed this line about the Jelly Beans That Make You Horny:

quote:

Willy McDuff: And let's not forget our secret inventions—the soup-flavored jelly
beans designed to keep the wee ones clean, hot and spicy beans that... (lowers his
voice) attract the birds. (Winks) That’s a story for another day, or perhaps a question
for your parents.
(The audience chuckles, appreciating the playful innuendo.)

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆

Kagrenak posted:

My understanding is the training dataset for GitHub Copilot is made up entirely of permissively licensed source code. The lawsuits won't affect that product in a direct way, and I highly doubt MS is going to go bankrupt over them.

This is definitely untrue, the very first thing that went viral with copilot was using it to generate the infamous "fast inverse square root" function from Quake III, which is open source nowadays but is licensed under the very restrictive GPL2 license.

A business that included this function in their product could easily be sued for copyright violation unless they complied with the GPL2 terms, including things like a requirement to open source their own code and allowing others to redistribute it for free.

https://twitter.com/StefanKarpinski/status/1410971061181681674

Note that it won't generate this any more but only because this particular code snippet was so famous that Microsoft explicitly blacklisted it. There's still the potential for it to regurgitate other, less-recognizable code from its dataset that's also copyrighted under restrictive licenses.
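For reference, the function in question looks roughly like this. This is a paraphrase of the famous Quake III routine, not the verbatim GPL2 source: `int32_t` plus `memcpy` stand in for the original's pointer cast so the bit-reinterpretation is well-defined on modern compilers, and the variable names are simplified.

```cpp
#include <cstdint>
#include <cstring>

// Fast inverse square root, paraphrased from Quake III's Q_rsqrt.
// Reinterprets the float's bits as an integer, applies the famous
// "magic constant" to get a first guess, then refines it with one
// Newton-Raphson step. Accurate to roughly 0.2%.
static float Q_rsqrt(float number)
{
    float y = number;
    std::int32_t i;
    std::memcpy(&i, &y, sizeof i);   // view the float's bit pattern
    i = 0x5f3759df - (i >> 1);       // magic-constant initial estimate
    std::memcpy(&y, &i, sizeof y);
    y = y * (1.5f - (number * 0.5f * y * y)); // one Newton-Raphson step
    return y;
}
```

The point stands either way: code this distinctive is trivially identifiable, which is exactly why regurgitating it verbatim from the training set is a licensing problem.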

RPATDO_LAMD fucked around with this message at 06:58 on Mar 4, 2024

Electric Wrigglies
Feb 6, 2015

Well, I guess AI should be waaay more effective at rooting out or identifying IP infringement than current methods.

Waffle House
Oct 27, 2004

You follow the path
fitting into an infinite pattern.

Yours to manipulate, to destroy and rebuild.

Now, in the quantum moment
before the closure
when all become one.

One moment left.
One point of space and time.

I know who you are.

You are Destiny.


RPATDO_LAMD posted:

This is definitely untrue, the very first thing that went viral with copilot was using it to generate the infamous "fast inverse square root" function from Quake III, which is open source nowadays but is licensed under the very restrictive GPL2 license.

A business that included this function in their product could easily be sued for copyright violation unless they complied with the GPL2 terms, including things like a requirement to open source their own code and allowing others to redistribute it for free.

https://twitter.com/StefanKarpinski/status/1410971061181681674

Note that it won't generate this any more but only because this particular code snippet was so famous that Microsoft explicitly blacklisted it. There's still the potential for it to regurgitate other, less-recognizable code from its dataset that's also copyrighted under restrictive licenses.

lol ontology

Copilot just like look man, someone gave me proper justification and everything, I see nothing wrong here.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
I finally had a reason to try ChatGPT for code. I told it to replace a function that printed sensor values over serial with one that saves them to a database using the SimplePgSQL library, and it came up with this:
C++ code:
void store_temperature(float temperature){
  SimplePgSQL pgsql;
  pgsql.setHost("192.168.0.1");
  pgsql.setUser("temp_user");
  pgsql.setPassword("Pass123");
  pgsql.setDatabase("temp_log");

  if(!pgsql.connect()) {
    ESP_LOGE(TAG, "Failed to connect to PostgreSQL server");
    return;
  }
  
  char query[100];
  snprintf(query, sizeof(query), "INSERT INTO temps (temperature) VALUES (%.2f)", temperature);
  
  if(!pgsql.query(query)) {
    ESP_LOGE(TAG, "Failed to execute SQL query");
    pgsql.close();
    return;
  }
  pgsql.close();
}
Looks about right, yes? Bonus points for safe(r) version of sprintf.

The problem is that this isn't how the library works, like at all. Looking at the example, it's supposed to be configured like this:
C++ code:
WiFiClient client;
char buffer[1024];
PGconnection conn(&client, 0, 1024, buffer);

conn.setDbLogin(PGIP,
    user,
    passwordDB,
    database,
    "utf8");

if (conn.status() == CONNECTION_OK) {
 //stuff 
}
So I guess it's similar to other situations where if it doesn't have enough training data, it'll just confidently make up plausible-looking poo poo. At least the compiler will save your rear end here, unlike let's say in a legal argument you asked it to write :v:


E:
Claude 3 is out, claims to be better than everything in every way, several times by exactly 0.1% :D

https://arstechnica.com/information-technology/2024/03/the-ai-wars-heat-up-with-claude-3-claimed-to-have-near-human-abilities/

Whether or not that's really true is another matter of course.

mobby_6kl fucked around with this message at 22:38 on Mar 4, 2024

Liquid Communism
Mar 9, 2004


Out here, everything hurts.

Mega Comrade posted:

It's a good point, though. People constantly assume "it's only going to get better" based on basically nothing; it's just something we've gotten used to with technology.
Lots of technologies hit a plateau far faster than people thought they would, or the advances get priced out of usefulness.

GenAI has advanced far quicker than I think most experts predicted but it could also just as quickly hit a dead end and stall out for years, it did before until neural networks came along and kick started it back into high gear.

With all the various directions research is going in and the vast amounts of money being poured in, this cycle probably has legs still but who the hell knows.

The real stall is that there isn't an infinite corpus to feed the models, and they've already pulled the low-hanging fruit of an internet scrape and started drawing legal action in response. I've been quite happy to see more proposals for salting images so that they're actively harmful to generative AI if included in training datasets.

I am all for grifters wasting huge amounts of time and money only to learn that they should have just hired someone who went to art school for $40 an hour to do work for hire.

Edit: Same as I will never stop laughing when obvious bad code gets implemented because 'ChatGPT says it works' and they have to hire an actual software engineer at consulting rates to unfuck themselves.

Liquid Communism fucked around with this message at 05:56 on Mar 5, 2024

Shooting Blanks
Jun 6, 2007

Real bullets mess up how cool this thing looks.

-Blade



Liquid Communism posted:

The real stall is that there isn't an infinite corpus to feed the models, and they've already pulled the low-hanging fruit of an internet scrape and started drawing legal action in response. I've been quite happy to see more proposals for salting images so that they're actively harmful to generative AI if included in training datasets.

I am all for grifters wasting huge amounts of time and money only to learn that they should have just hired someone who went to art school for $40 an hour to do work for hire.

Edit: Same as I will never stop laughing when obvious bad code gets implemented because 'ChatGPT says it works' and they have to hire an actual software engineer at consulting rates to unfuck themselves.

Don't forget companies like Air Canada getting burned after they replaced their customer service folks with AI - which decided to make up policy on the spot because it wasn't trained properly.

030524
Mar 5, 2024
gettin burnt' by artificial intelligence :thunk:

smoobles
Sep 4, 2014

Once AI begins scraping AI the hallucinations will improve I'm sure

Tei
Feb 19, 2011

smoobles posted:

Once AI begins scraping AI the hallucinations will improve I'm sure

I don't count on that. Somebody will create algorithms to achieve "feedback suppression".

We'll have to wait a few more years to see the kind of damage language models and AI art will do to people.

Staluigi
Jun 22, 2021

thermodynamics cheated

Tei posted:

I don't count on that. Somebody will create algorithms to achieve "feedback suppression".

We'll have to wait a few more years to see the kind of damage language models and AI art will do to people.

the humans in the matrix weren't there as a power source, their input was the only means by which ai could harvest original nonhallucinatory data

Lucid Dream
Feb 4, 2003

That boy ain't right.

Staluigi posted:

the humans in the matrix weren't there as a power source, their input was the only means by which ai could harvest original nonhallucinatory data
Humans basically hallucinate their own reality as it is, just with more internal consistency (usually).

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Staluigi posted:

the humans in the matrix weren't there as a power source, their input was the only means by which ai could harvest original nonhallucinatory data
The Wachowskis' original script actually had the humans being used for computation, but the studio thought that was too complicated for viewers to understand, so they changed it to the dumbshit power plant thing that makes no sense.

JazzFlight
Apr 29, 2006

Oooooooooooh!

cat botherer posted:

The Wachowskis' original script actually had the humans being used for computation, but the studio thought that was too complicated for viewers to understand, so they changed it to the dumbshit power plant thing that makes no sense.
Geez, that just makes me think of a cooler way the series could have gone if the robots were using human brains for actually sustaining the simulation (like, Neo's superpowers were an example of a single brain changing the simulation itself). So Neo threatening to wake everyone up to reality would be different to just the physical act of yanking their body out of the pod.
If that were the case, it could have been a message about the world being what people want it to be instead of what they blindly accept.

I dunno, maybe I'm just disillusioned with the sequels and would have preferred almost anything else, lol.

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice

JazzFlight posted:

Geez, that just makes me think of a cooler way the series could have gone if the robots were using human brains for actually sustaining the simulation (like, Neo's superpowers were an example of a single brain changing the simulation itself). So Neo threatening to wake everyone up to reality would be different to just the physical act of yanking their body out of the pod.
If that were the case, it could have been a message about the world being what people want it to be instead of what they blindly accept.

I dunno, maybe I'm just disillusioned with the sequels and would have preferred almost anything else, lol.

Everything you've said seems to me like it would have blended perfectly into what the rest of the movies did anyway; it just would've made a lot more sense with the computational bit.

Gynovore
Jun 17, 2009

Forget your RoboCoX or your StickyCoX or your EvilCoX, MY CoX has Blinking Bewbs!

WHY IS THIS GAME DEAD?!

cat botherer posted:

The Wachowskis' original script actually had the humans being used for computation, but the studio thought that was too complicated for viewers to understand, so they changed it to the dumbshit power plant thing that makes no sense.

...and then, because the power source bit is *literally* bullshit from a scientific standpoint, people assumed that the Wachowskis put it in as a secret signal to the Really Smart People or something.

Tarkus
Aug 27, 2000

mobby_6kl posted:

I finally had a reason to try ChatGPT for code. Told it to replace a function that printed sensor values over serial with one that saves it in a database using the SimplePgSQL library and it came up with this:
**Code**

So, the thing is, the further you get off the beaten path, the likelier it is (like you surmised already) that the model doesn't have complete information, or that it gets 'confused' with a similar library. I tried the same thing with that library and got similarly poor (albeit different) results. However, when I pasted in the .h, or the sample code from the library (I tried both), I got what appear to be very good results. For relatively niche cases, like libraries for comparatively lesser-known stuff such as ESP SQL libraries, you can dump in some sample code that you know works (or should work) and it can put the rest together from context. Interestingly, Claude and GPT-4 both knew what the library was, just not exactly how to use it. Claude is a very good tool for this because its context window is so much larger; even the free Sonnet version is very handy.

What's interesting as well is that when I have bizarre issues that befuddle me, I can give it code, tell it what I'm experiencing and it can usually tell me what the problem is. Especially if it's a programmatic issue, it'll know and tell me 99% of the time. It's an incredibly powerful tool.

Bel Shazar
Sep 14, 2012

Shooting Blanks posted:

Don't forget companies like Air Canada getting burned after they replace their customer service folks with AI - which decides to make up policy on the spot, because it wasn't trained properly.

Heh, I've had human customer service folks do that too...

031124_2
Mar 12, 2024
Burnt by Artificial Intelligence


Noam Chomsky
Apr 4, 2019

:capitalism::dehumanize:


AI software engineer dropping soon

https://www.pcmag.com/news/this-software-engineer-ai-can-train-other-ais-code-websites-by-itself

quote:

In one video, Cognition Labs CEO Scott Wu shows how Devin users are able to view the AI tool's command line, code editor, and workflow as it moves through various steps to complete coding projects and data research tasks. Once Devin receives a request, it's able to scour the internet for educational materials to learn how to complete tasks and can debug its own issues encountered during the engineering process. Users are able to intervene if desired, however.
