withak
Jan 15, 2003


Fun Shoe
I tried Google's AI search response beta a month or two ago, and basically it was regurgitating the exact same stuff as the first few hits, which are also usually AI-generated but not labeled as such.

Ruffian Price
Sep 17, 2016

VikingofRock posted:

We are blinding ourselves to the cosmos so that we can have in-flight wifi.
Inherited wealth is blinding us to the cosmos so they can force new solutions to manufactured cravings on us, tyvm

PT6A
Jan 5, 2006

Public school teachers are callous dictators who won't lift a finger to stop children from peeing in my plane
I think the interesting thing about AI is a lot of people gently caress around with it and they're like "it knows the absolute most basic aspects of something I know little or nothing about! Boy that sure is impressive!"

Well, no, it isn't.

OddObserver
Apr 3, 2009

PT6A posted:

I think the interesting thing about AI is a lot of people gently caress around with it and they're like "it knows the absolute most basic aspects of something I know little or nothing about! Boy that sure is impressive!"

Well, no, it isn't.

Nah, it's genuinely impressive that it can simulate knowing that sort of stuff.

It's just that the bullshit autogeneration thing is frequently an issue when people try to actually use it.

dr_rat
Jun 4, 2001
I mean, even if it did provide better search results, would Google be able to offer "AI-powered" search results en masse affordably anyway? It was my understanding that a lot of the AI stuff is actually pretty processor-intensive, and pretty sure Google doesn't actually make that much per individual search, so not sure how well making each search need more resources is going to work out for them.

PT6A
Jan 5, 2006

Public school teachers are callous dictators who won't lift a finger to stop children from peeing in my plane

OddObserver posted:

Nah, it's genuinely impressive that it can simulate knowing that sort of stuff.

In a sense, perhaps. But mainly as a curiosity rather than anything useful, at this point.

pumpinglemma
Apr 28, 2009

DD: Fondly regard abomination.

In maths and CS it’s basically on par with a C student (occasionally a B student) with good writing skills and a penchant for bullshit. This is unironically impressive as all hell and even occasionally useful, but just like a real C student, you should never ever trust it to be right.

Now if someone would make a version that didn’t hallucinate references and was trained on textbooks and journal and conference papers rather than the unfiltered Internet, then I would punch a baby for access to it. Just being able to effectively answer questions like “is there a result in some obscure 1975 paper that does this exact thing I want?” or “has anyone generalised Grib’s Theorem to apply to flangewurblers that don’t grumble?” would save me literal days of work on a regular basis, even if it wasn’t always correct.

Rand Brittain
Mar 25, 2013

"Go on until you're stopped."
A large language model's ability to bullshit is genuinely impressive. I've seen people play around with these things and it's extremely entertaining to watch it spit out "a version of Hamlet where Claudius is fifty feet tall and invulnerable to all harm" and then "a version of Hamlet where every character is a himbo".

The problem is that the model, as I understand it, is basically incapable of making content you can actually trust because it doesn't actually analyze what it's saying at all, and no matter how advanced it becomes it never can or will. A language model doesn't even try to come up with an answer to your question; it just correlates data to find something that looks like the right answer. It would be insane to ever let it work on anything important (not that anyone would ever do that, ha ha, ha ha, ha ha, weep).

MixMasterMalaria
Jul 26, 2007

Rand Brittain posted:

A large language model's ability to bullshit is genuinely impressive. I've seen people play around with these things and it's extremely entertaining to watch it spit out "a version of Hamlet where Claudius is fifty feet tall and invulnerable to all harm" and then "a version of Hamlet where every character is a himbo".

The problem is that the model, as I understand it, is basically incapable of making content you can actually trust because it doesn't actually analyze what it's saying at all, and no matter how advanced it becomes it never can or will. A language model doesn't even try to come up with an answer to your question; it just correlates data to find something that looks like the right answer. It would be insane to ever let it work on anything important (not that anyone would ever do that, ha ha, ha ha, ha ha, weep).

"Please formulate a strategy for defending the earth from an impending alien invasion."

ChatGPT posted:


Defending Earth from an impending alien invasion is a challenging scenario, but here's a comprehensive strategy that combines diplomatic, technological, and military approaches:

1. **Gather Intelligence:**
- Utilize advanced satellite and telescope systems to monitor and analyze alien activity.
- Establish international collaborations to pool resources and share intelligence.

2. **Diplomatic Outreach:**
- Initiate communication efforts to establish peaceful contact with the alien civilization, demonstrating our willingness to peacefully coexist.
- Use linguists, anthropologists, and cultural experts to understand their language and intentions.

3. **Global Unity:**
- Create a unified global organization to coordinate efforts, share information, and develop a collective strategy.
- Convene an emergency United Nations assembly to address the threat and formulate a joint response.

4. **Technological Advancement:**
- Mobilize the world's best scientists, engineers, and researchers to develop advanced technologies based on recovered alien technology (if possible).
- Focus on enhancing defense systems, energy production, and communication tools.

5. **Defensive Measures:**
- Develop a multi-layered defense system that includes space-based platforms, ground-based installations, and cyber-defense mechanisms.
- Repurpose existing military assets and design new ones specifically tailored to the threat.

6. **Collaborative Research:**
- Collaborate with the scientific community to understand the alien's biology, technology, and weaknesses.
- Share findings globally to expedite research progress.

7. **Public Awareness:**
- Inform the public about the situation to prevent panic and promote global cooperation.
- Encourage citizens to contribute ideas and support the effort in various ways.

8. **Crisis Management:**
- Establish emergency response teams to manage potential fallout from alien actions.
- Develop contingency plans for evacuation, resource allocation, and communication breakdowns.

9. **Unconventional Strategies:**
- Explore unconventional tactics that might exploit the invaders' potential vulnerabilities, such as exploiting environmental factors unique to Earth.

10. **Last Resort Measures:**
- Prepare for worst-case scenarios, such as a failed peaceful resolution or overwhelming force.
- Develop and maintain a network of underground bunkers and shelters to protect essential personnel and the survival of humanity.

11. **Humanitarian Focus:**
- Prioritize the safety and well-being of Earth's population, regardless of nationality or status.
- Establish protocols for assisting affected regions and refugees.

12. **Learning from History:**
- Study historical conflicts and encounters to identify patterns and potential strategies that could be applied to the current situation.

Remember that the success of any strategy relies on international cooperation, advanced technology development, and a balanced approach that prioritizes both defense and diplomacy.

"Thank you, I will bring this to the committee. Let's hope it works!"

ChatGPT posted:

You're welcome! I'm glad I could help. Best of luck with your committee discussions. Remember, adapting the strategy to the evolving situation and being open to new ideas will be key. Stay united, stay informed, and stay prepared. If you have any more questions in the future, feel free to ask. Good luck, and may your efforts lead to a successful defense of Earth!

Ghost Leviathan
Mar 2, 2017

Exploration is ill-advised.
Thing is it's programmed with the formatting for information, but has no ability to actually understand the content or the context it's in. AI stuff is very recognisable once you know what to look for because it's a particular kind of tedious waffle that's blatantly trying to pad the word count and will invent nonsense to fill in gaps. For scientific and legal applications, this is much worse than useless, and for corporate applications this is literally only applicable to poo poo that isn't actually necessary in the first place.

dr_rat posted:

I mean even if it did provide better search results would Google be able to offer "AI-powered" search results on mass affordably anyway. It was my understanding that a lot of the AI stuff is actually pretty processor intensive and pretty sure google doesn't actually make that much per indvidual search, so not sure how well make each search need resources is going to work out for them.

One reason I'm really hoping this poo poo crashes at least as fast as NFTs did.

The Lone Badger
Sep 24, 2007

In the Good Timeline this would lead to a fashion for extremely concise and specific writing, because your audience will check out the moment they encounter a hint of waffle.

mllaneza
Apr 28, 2007

Veteran, Bermuda Triangle Expeditionary Force, 1993-1952




Rand Brittain posted:

A large language model's ability to bullshit is genuinely impressive.

LLMs have proven to be impressively useful for generating corporate bullshit like self-reviews, cover letters, and resume updates.

dr_rat
Jun 4, 2001

The Lone Badger posted:

In the Good Timeline this would lead to a fashion for extremely concise and specific writing, because your audience will check out the moment they encounter a hint of waffle.

Going to open up new employment opportunities for those who can type waffle quicker and for less cost than the AI chat software. Can you type at 400 words a minute and are you happy to work for 0.034 cents an hour? Boy, do we have an opportunity for you!!!

Ruffian Price
Sep 17, 2016

dr_rat posted:

It was my understanding that a lot of the AI stuff is actually pretty processor-intensive, and pretty sure Google doesn't actually make that much per individual search, so not sure how well making each search need more resources is going to work out for them.
I still sometimes see people going "oh, they're simulating typing so it feels more like a real chat" and have to stress that no, this is as fast as the supercomputer* can go answering your 2+2 question; the inefficiency is part of the design. Every company implementing this is making it a loss leader, hoping that the outside funding outpaces the ridiculous operating costs (OpenAI's text prediction reportedly costs $700,000 a day). The minute it's clear the infinite growth is not gonna come, they'll axe this, and everybody who was all "singularity is imminent" in 2022 because text prediction got good enough for spam is gonna pretend they weren't buying the hype at all.

*) a single session gets the compute equivalent to 1/50 of a Cray XT4 from 2006

Ghost Leviathan
Mar 2, 2017

Exploration is ill-advised.
As someone wishing webrings would return, I just realised the confederation of Nintendo wikis I use basically functions that way. A usable internet requires active community co-ops.

pumpinglemma
Apr 28, 2009

DD: Fondly regard abomination.

Ruffian Price posted:

I still sometimes see people going "oh, they're simulating typing so it feels more like a real chat" and have to stress that no, this is as fast as the supercomputer* can go answering your 2+2 question; the inefficiency is part of the design. Every company implementing this is making it a loss leader, hoping that the outside funding outpaces the ridiculous operating costs (OpenAI's text prediction reportedly costs $700,000 a day). The minute it's clear the infinite growth is not gonna come, they'll axe this, and everybody who was all "singularity is imminent" in 2022 because text prediction got good enough for spam is gonna pretend they weren't buying the hype at all.

*) a single session gets the compute equivalent to 1/50 of a Cray XT4 from 2006
Sorry, but that’s such a suspiciously specific measurement I’ve got to ask: how does 1/50 of a Cray XT4 from 2006 compare to, say, a medium-spec graphics card from 2023? (And what do you mean by compute used over a single session? Because that sounds like a measure of total CPU cycles used or similar but then comparing that to a specific model of computer makes no sense.)

Ruffian Price
Sep 17, 2016

Maybe it was a dumb way of representing it; it was more about processing power available than compute used (if you're playing around with local models, flops are a good predictor of how fast the tokens will generate, though; when you look at community charts comparing GPUs, the relationship's almost linear). The XT4 was around a hundred teraflops, and GPT-3.5 Turbo ran off clusters of Nvidia A100s, with four sessions at a time on a single unit. Tbh, now that I check the A100's specs again, assuming the environment was optimized to run on the hardware, a single unit would be closer in performance to an entire 2008 Jaguar.
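
As a back-of-the-envelope check on those figures, the sketch below uses only the numbers quoted in this post plus Nvidia's published A100 peak (roughly 312 dense BF16 teraflops). These are peak numbers rather than sustained throughput, so treat the result as order-of-magnitude at best:

```python
# Rough peak-flops comparison using the thread's own assumptions; not a benchmark.
XT4_TFLOPS = 100         # "around a hundred teraflops" (2006 Cray XT4)
A100_BF16_TFLOPS = 312   # Nvidia's published A100 peak, dense BF16
SESSIONS_PER_A100 = 4    # "four sessions at a time on a single unit"

per_session_tflops = A100_BF16_TFLOPS / SESSIONS_PER_A100   # ~78 TFLOPS
fraction_of_xt4 = per_session_tflops / XT4_TFLOPS           # ~0.78

print(f"~{per_session_tflops:.0f} TFLOPS per session, "
      f"roughly {fraction_of_xt4:.0%} of a whole XT4 rather than 1/50 of one")
```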

exmachina
Mar 12, 2006

Look Closer

Rand Brittain posted:

A large language model's ability to bullshit is genuinely impressive. I've seen people play around with these things and it's extremely entertaining to watch it spit out "a version of Hamlet where Claudius is fifty feet tall and invulnerable to all harm" and then "a version of Hamlet where every character is a himbo".

The problem is that the model, as I understand it, is basically incapable of making content you can actually trust because it doesn't actually analyze what it's saying at all, and no matter how advanced it becomes it never can or will. A language model doesn't even try to come up with an answer to your question; it just correlates data to find something that looks like the right answer. It would be insane to ever let it work on anything important (not that anyone would ever do that, ha ha, ha ha, ha ha, weep).

If the model had the ability to admit it doesn't know something, that would make it much more useful. But as I understand it the model is just saying "what are the four characters most likely to occur next".

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.
I wouldn't say it tries to make something look like the right answer so much as AN answer. Answer in the form of an answer.

Neito
Feb 18, 2009

😌Finally, an avatar the describes my love of tech❤️‍💻, my love of anime💖🎎, and why I'll never see a real girl 🙆‍♀️naked😭.

exmachina posted:

If the model had the ability to admit it doesn't know something, that would make it much more useful. But as I understand it the model is just saying "what are the four characters most likely to occur next".

Pretty much this. It's a little more advanced than that; what you're describing is basically a character-based Markov chain, which can produce some... interesting results. For example, running your post through a word-based Markov chain produces:

Markov Chain posted:

but as i understand it doesnt know something that would make it the ability to occur next. as i understand it the model is just saying what are the ability to admit it

which is tech we've had since the '80s. ChatGPT's major advantage over previous solutions is largely just the size of the training data.
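
If anyone wants to play with it, a word-based Markov chain like that fits in a few lines of Python. This is just a minimal order-1 sketch (each next word depends only on the current word), nothing like how an actual LLM works:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that followed it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, length=30):
    """Random-walk the chain, picking each next word by observed frequency."""
    word = random.choice(list(chain.keys()))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: restart from a random word
            word = random.choice(list(chain.keys()))
        else:
            word = random.choice(followers)
        out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    post = ("if the model had the ability to admit it doesnt know something "
            "that would make it much more useful but as i understand it the "
            "model is just saying what are the four characters most likely "
            "to occur next")
    print(generate(build_chain(post)))
```

Run on a post this short, it loops back on itself almost immediately, which is exactly the garbled-but-familiar effect in the quote above.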

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

MixMasterMalaria posted:

"Please formulate a strategy for defending the earth from an impending alien invasion."

"Thank you, I will bring this to the committee. Let's hope it works!"

On the plus side, at least this means it was trained on xcom!

MadDogMike
Apr 9, 2008

Cute but fanged

Volmarias posted:

On the plus side, at least this means it was trained on xcom!

Better not trust anything it says has a 95% success chance then.

Mister Facetious
Apr 21, 2007

I think I died and woke up in L.A.,
I don't know how I wound up in this place...

:canada:

MixMasterMalaria posted:

"Please formulate a strategy for defending the earth from an impending alien invasion."

"Surrender as quickly as possible, so nothing is destroyed, and shareholders are protected."

goatsestretchgoals
Jun 4, 2011

https://youtu.be/QF_G2e7FBwE?si=FN5cDa2J_5laK_tB

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

MadDogMike posted:

Better not trust anything it says has a 95% success chance then.

:xcom:

America Inc.
Nov 22, 2013

I plan to live forever, of course, but barring that I'd settle for a couple thousand years. Even 500 would be pretty nice.
So the topic of AIs being trained on copyrighted works and stealing from creators has already been explored, but I also find it curious how much people in the third world are exploited to train these algorithms.

It seems as if tech companies have graduated from sweatshops to offices where people have to review extreme content from social media or label data for OpenAI (or, years before that, Mechanical Turk). Offices where people get fired if they unionize or speak out.

The technology keeps getting more advanced, but the power dynamics and how wealth gets distributed stay exactly the same.

Vegetable
Oct 22, 2010

America Inc. posted:

So the topic of AIs being trained on copyrighted works and stealing from creators has already been explored, but I also find it curious how much people in the third world are exploited to train these algorithms.

It seems as if tech companies have graduated from sweatshops to offices where people have to review extreme content from social media or label data for OpenAI (or, years before that, Mechanical Turk). Offices where people get fired if they unionize or speak out.

The technology keeps getting more advanced, but the power dynamics and how wealth gets distributed stay exactly the same.
Yeah, this problem long precedes AI. Some human has to review violent, sexual, exploitative or otherwise questionable content. Tech companies have massive armies of outsourced, underpaid workers for this. Data labeling for AI is just an extension of it. And if I’m being honest, it’s probably a slightly better gig because you’re not looking exclusively at porn or beheadings.

paranoid randroid
Mar 4, 2007
the democrats have overseen the repeal of roe v wade, multiple laws making it illegal to be gay or trans in five states with no federal action
why am i supposed to vote for these abject loving losers again?

(USER WAS PUT ON PROBATION FOR THIS POST)

Shadowlyger
Nov 5, 2009

ElvUI super fan at your service!

Ask me any and all questions about UI customization via PM

paranoid randroid posted:

the democrats have overseen the repeal of roe v wade, multiple laws making it illegal to be gay or trans in five states with no federal action
why am i supposed to vote for these abject loving losers again?

Because not voting for them makes the problem worse.

Agents are GO!
Dec 29, 2004

paranoid randroid posted:

[Not a Tech Nightmare]

As much as I love your avatar, I don't think this is a "Tech Nightmare". :shobon:

mllaneza
Apr 28, 2007

Veteran, Bermuda Triangle Expeditionary Force, 1993-1952




paranoid randroid posted:

the democrats have overseen the repeal of roe v wade, multiple laws making it illegal to be gay or trans in five states with no federal action
why am i supposed to vote for these abject loving losers again?

The Federal courts are doing a pretty good job of blocking implementation of all that poo poo. There isn't the urgency for Congressional or Executive action there would be otherwise.

My suggestion would be to register Democratic and work for actual leftists who will do something in primary campaigns. The choices are to spin up a true left party, which won't show results anytime soon, or to build the leftist caucus in the Democratic Party which is starting to work as a strategy already.

Ruffian Price
Sep 17, 2016

Agents are GO! posted:

As much as I love your avatar, I don't think this is a "Tech Nightmare". :shobon:
if financial companies and realty squatting companies can now be tech companies, we should label this political nightmare a tech nightmare and maybe get some VC on board

BRJurgis
Aug 15, 2007

Well I hear the thunder roll, I feel the cold winds blowing...
But you won't find me there, 'cause I won't go back again...
While you're on smoky roads, I'll be out in the sun...
Where the trees still grow, where they count by one...
I don't usually read this thread, but am I correct that usage of the word AI and the ideas people have about it are insane?

Like some younger guys at work (and credulous tech journal readers of any age) literally use the word like magic. AI gonna change everything! Solve all problems! Anything you can imagine and also all the things you can't!

Was talking about sustainability and limits of growth with one guy and he's like "AI is gonna give us the answer!"

"But we know the answer, it's stop doing what we're doing yesterday and we're still not stopping. AI can't "solve" physical limitations and thermodynamics and such".

"We don't know that it can't! AI doesn't have limitations like that!"

Jon
Nov 30, 2004

BRJurgis posted:

I don't usually read this thread, but am I correct that usage of the word AI and the ideas people have about it are insane?

Like some younger guys at work (and credulous tech journal readers of any age) literally use the word like magic. AI gonna change everything! Solve all problems! Anything you can imagine and also all the things you can't!

Was talking about sustainability and limits of growth with one guy and he's like "AI is gonna give us the answer!"

"But we know the answer, it's stop doing what we're doing yesterday and we're still not stopping. AI can't "solve" physical limitations and thermodynamics and such".

"We don't know that it can't! AI doesn't have limitations like that!"

I might be showing my age here, but Michio Kaku was pretty well known among tech bros around a decade ago; if they're familiar, you might want to point at Kaku's statements on AI: https://www.businessinsider.com/ai-theoretical-physicist-says-fears-about-chatgpt-bard-overblown-2023-8

MrYenko
Jun 18, 2012

#2 isn't ALWAYS bad...

Jon posted:

I might be showing my age here, but Michio Kaku was pretty well known among tech bros around a decade ago; if they're familiar, you might want to point at Kaku's statements on AI: https://www.businessinsider.com/ai-theoretical-physicist-says-fears-about-chatgpt-bard-overblown-2023-8

Noted reliable observer Michio Kaku.

Heck Yes! Loam!
Nov 15, 2004

a rich, friable soil containing a relatively equal mixture of sand and silt and a somewhat smaller proportion of clay.
Has anyone gotten Ja Rule's take on AI?

Jon
Nov 30, 2004

MrYenko posted:

Noted reliable observer Michio Kaku.

I don't understand the implication you're making

Antigravitas
Dec 8, 2019

The salvation for the farmers:

BRJurgis posted:

I don't usually read this thread, but am I correct that usage of the word AI and the ideas people have about it are insane?

Like some younger guys at work (and credulous tech journal readers of any age) literally use the word like magic. AI gonna change everything! Solve all problems! Anything you can imagine and also all the things you can't!

Was talking about sustainability and limits of growth with one guy and he's like "AI is gonna give us the answer!"

"But we know the answer, it's stop doing what we're doing yesterday and we're still not stopping. AI can't "solve" physical limitations and thermodynamics and such".

"We don't know that it can't! AI doesn't have limitations like that!"

None of these people know how an LLM works and what its limitations are.

Yes, the Discourse is utterly insane.

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.
Sounds like they are confusing it with the sci-fi idea of a technological singularity.

SaTaMaS
Apr 18, 2003

Vegetable posted:

Yeah, this problem long precedes AI. Some human has to review violent, sexual, exploitative or otherwise questionable content. Tech companies have massive armies of outsourced, underpaid workers for this. Data labeling for AI is just an extension of it. And if I’m being honest, it’s probably a slightly better gig because you’re not looking exclusively at porn or beheadings.

It seems like after ChatGPT N has been trained on questionable content, it should be able to flag a lot of that content when training ChatGPT N+1?
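
In outline that's just a filtering pass over the corpus before the next training run; a minimal sketch, assuming you already have a model-backed moderation classifier (the flag_questionable() call below is a hypothetical stand-in, not any real API):

```python
from typing import Iterable, Iterator

def flag_questionable(text: str) -> float:
    """Hypothetical stand-in: return a 0-1 'questionable content' score
    from the previous-generation model's moderation classifier."""
    raise NotImplementedError("plug in your own moderation model here")

def filter_corpus(docs: Iterable[str], threshold: float = 0.8) -> Iterator[str]:
    """Drop documents the previous-generation model flags above the threshold."""
    for doc in docs:
        if flag_questionable(doc) < threshold:
            yield doc
```

The catch, per the posts above, is that the first generation's filters were built on human labeling in the first place, so this reduces rather than eliminates the amount of content people have to review by hand.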
