BougieBitch
Oct 2, 2013

Basic as hell

cat botherer posted:

Much like a baby, that’s loving stupid.

Pretty sure the point is to actually develop a model for, like, language acquisition research, not just a novel way to spend a lot of extra time and effort to make a chatGPT-like

Whether it actually succeeds at that goal is more reading than I'm gonna do right now, but it's at least notionally understandable

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

I know it's been a month now, I'm struggling with a bunch of physical issues that are making it hard to read my screen. Would you still want a reply to this? There's been quite a few things that have happened on top of everything since my last message and I'd like to address them as well.



cat botherer posted:

Much like a baby, that’s loving stupid.

I remember you posting about being a data scientist, can I talk to you about a small job? I can buy you Plat on top of payment. My profile has discord or email, get ahold of me and I can get you a Plat Upgrade certificate.

Tei
Feb 19, 2011

LASER BEAM DREAM posted:

My company opened up Github Copilot for all developers this week. I guess they're not worried about lawsuits against OpenAI? This is a big company and I'm sure corporate lawyers reviewed this before giving approval.

The power of Hopium. They want a world where a big company has zero employees. AI seems to promise that: a world where companies don't have to hire people.

Maybe there's no gold in the "gold mine", but somebody is going to buy the mining tools, and somebody is going to make a fortune selling them. It doesn't matter if the mine has gold or not.

Tei fucked around with this message at 13:37 on Feb 2, 2024

XYZAB
Jun 29, 2003

HNNNNNGG!!
Apologies if there’s a better thread for this, but can anyone here recommend to me a decent goon-approved “audio file to text” AI transcription github project that I can install locally to chew over a bunch of .wav files I’ve got kicking around, that doesn’t require me to upload those files to a third party service for processing? I.e., If you’ve used something that did a decent job with transcribing noisy lecture audio using an RTX card, that also isn’t the newest version of Microsoft Word, that’s what I’m looking for.

SCheeseman
Apr 23, 2003

XYZAB posted:

Apologies if there’s a better thread for this, but can anyone here recommend to me a decent goon-approved “audio file to text” AI transcription github project that I can install locally to chew over a bunch of .wav files I’ve got kicking around, that doesn’t require me to upload those files to a third party service for processing? I.e., If you’ve used something that did a decent job with transcribing noisy lecture audio using an RTX card, that also isn’t the newest version of Microsoft Word, that’s what I’m looking for.

OpenAI's Whisper. There are a lot of options: the original Python release, a C++ port (whisper.cpp), or wrappers with a web interface.
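If it helps, here's a minimal sketch of batch-transcribing local .wav files with the `openai-whisper` Python package (assumes `pip install openai-whisper` and a working PyTorch install; the folder path and model name below are just placeholders, and it falls back to CPU when no CUDA device is found):

```python
# Batch-transcribe every .wav in a folder with openai-whisper,
# writing a .txt next to each file. Nothing leaves the machine.
import pathlib


def find_wavs(folder):
    """Collect the .wav files in a folder, sorted for stable ordering."""
    return sorted(pathlib.Path(folder).glob("*.wav"))


def transcribe_all(folder, model_name="medium.en"):
    # Imports kept local so the helper above works without torch/whisper.
    import torch
    import whisper  # the openai-whisper package

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = whisper.load_model(model_name, device=device)
    for wav in find_wavs(folder):
        result = model.transcribe(str(wav))  # returns a dict with "text"
        wav.with_suffix(".txt").write_text(result["text"])


# transcribe_all("lectures/")  # placeholder path
```

`medium.en` is a reasonable quality/VRAM tradeoff on an RTX card; `large-v3` handles noisy lecture audio better if you have the memory for it.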

020424_3
Feb 5, 2024
:qq: i've been made obsolete by an algorithm :qq:

MixMasterMalaria
Jul 26, 2007

020424_3 posted:

:qq: i've been made obsolete by an algorithm :qq:

Algorithms can't threaten me with obsolescence, I was never relevant to begin with.

020524
Feb 6, 2024
[WARNING] This A.I. is mimicking the illness of its Administrator(s) :sick:

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
Ansys, the engineering analysis software suite that pretty much every single engineering company in the world uses, wants to put ChatGPT into their products. Or rather, they have already done so and want to expand AI features.

You are an engineer and specialist in hypersonic cruise missiles. Here is access to my entire design catalogue please design me a better missile.

I mean, surely that's how the execs think it works, but lmao.

Waffle House
Oct 27, 2004

You follow the path
fitting into an infinite pattern.

Yours to manipulate, to destroy and rebuild.

Now, in the quantum moment
before the closure
when all become one.

One moment left.
One point of space and time.

I know who you are.

You are Destiny.


Boris Galerkin posted:

Ansys, the engineering analysis software suite that pretty much every single engineering company in the world uses, wants to put ChatGPT into their products. Or rather, they have already done so and want to expand AI features.

You are an engineer and specialist in hypersonic cruise missiles. Here is access to my entire design catalogue please design me a better missile.

I mean, surely that's how the execs think it works, but lmao.

Is there any kind of security against researchers traipsing through accessible ontology to find backdoors into cached and firewalled content? Like could I just couch the prompt correctly to encourage the model to go right through pytorch on the back end and spit out a bunch of classified results?

Kagrenak
Sep 8, 2010

Waffle House posted:

Is there any kind of security against researchers traipsing through accessible ontology to find backdoors into cached and firewalled content? Like could I just couch the prompt correctly to encourage the model to go right through pytorch on the back end and spit out a bunch of classified results?

Each inference instance is going to have independent resources. I doubt they'd reinforcement train their hosted models against the new data or anything insane like that. The risk profile would probably be similar to any other cloud service, which is to say pretty high for classified data. I would also imagine that if people are using this software in a classified mode, that module would be disabled or only work using some sort of on-prem edge node for model inference. Some laboratory software is like this, certain functionality turns off if it's not supported in the 21 CFR 11 compliant mode.
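For illustration, that kind of gating might look like this (purely a hypothetical sketch; the mode names and host are invented, not anything Ansys actually ships): the AI module stays off in a restricted/compliant mode unless inference is pinned to an approved on-prem host.

```python
# Hypothetical feature gate: the AI-assist module is disabled in a
# restricted/compliant deployment unless inference runs on-prem.
APPROVED_ON_PREM_HOSTS = ("inference.internal.example",)  # invented name


def ai_assist_allowed(compliance_mode: str, inference_host: str) -> bool:
    """Return True if the AI module may activate in this deployment."""
    on_prem = inference_host in APPROVED_ON_PREM_HOSTS
    if compliance_mode == "restricted":
        # Same pattern as the 21 CFR 11 example: functionality turns
        # off unless it is supported in the compliant configuration.
        return on_prem
    return True
```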

Waffle House
Oct 27, 2004

You follow the path
fitting into an infinite pattern.

Yours to manipulate, to destroy and rebuild.

Now, in the quantum moment
before the closure
when all become one.

One moment left.
One point of space and time.

I know who you are.

You are Destiny.


Kagrenak posted:

Each inference instance is going to have independent resources. I doubt they'd reinforcement train their hosted models against the new data or anything insane like that. The risk profile would probably be similar to any other cloud service, which is to say pretty high for classified data. I would also imagine that if people are using this software in a classified mode, that module would be disabled or only work using some sort of on-prem edge node for model inference. Some laboratory software is like this, certain functionality turns off if it's not supported in the 21 CFR 11 compliant mode.

Okay, so there is actually a bulwark, that's fantastic. Although it'd be very cyberpunk and GET ME MY GOGGLES, the ability to back-alley your way into everyone's topology would probably uproot some scary things.

sinky
Feb 22, 2011



Slippery Tilde
https://x.com/DrCJ_Houldcroft/status/1758111493181108363?s=20

i'm 'Retat'



Bug Squash
Mar 18, 2009


Looking forwards to every branch of science auto-Sokaling themselves.

Rappaport
Oct 2, 2013

sinky posted:

i'm 'Retat'



What the Hell was the prompt here? "Give me a rat with a giant dong umbilical cord and weird cell cluster globules"? :psylon:

Cicero
Dec 17, 2003

Jumpjet, melta, jumpjet. Repeat for ten minutes or until victory is assured.
OpenAI announced a new text-to-video model with some very impressive (albeit certainly cherry picked) examples: https://openai.com/sora

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.
Damnit Frontiers, stop giving open access journals a bad name!

KingKalamari
Aug 24, 2007

Fuzzy dice, bongos in the back
My ship of love is ready to attack

Cicero posted:

OpenAI announced a new text-to-video model with some very impressive (albeit certainly cherry picked) examples: https://openai.com/sora

Oh boy, this can only lead to good things in an election year! Looking forward to it being even easier to spread misinformation online while making the livelihoods of video production workers even more horrible! /s

Freakazoid_
Jul 5, 2013


Buglord

Testtomcells are concentrated in the butt.

Jamwad Hilder
Apr 18, 2007

surfin usa

Cicero posted:

OpenAI announced a new text-to-video model with some very impressive (albeit certainly cherry picked) examples: https://openai.com/sora

This gives me such a bad gut feeling. People are going to do awful stuff with this.

smoobles
Sep 4, 2014

yeah it's all over once we have realistic video AI

The Internet will no longer be reliable for anything, and the great Logging Off will begin.

selec
Sep 6, 2003

The thing about AI video is that there's a way you can present content that establishes your reputation as an information source that will further guarantee the trust you build. Were I a political party, say, I'd just go with this:

-We have a site where you can view any video of our candidates or reps, and we guarantee none of it will have ever been touched by AI.
-You then enforce a policy whereby you don't work with vendors or any candidate or rep who will not agree to not use any machine-generated content of any type.
-You rigorously enforce that and police it and make a big loving deal when you catch people violating the agreement.
-You also pivot to a more policy-focused set of priorities, with genuine metrics for how the public who you're inviting to trust you know that you're hitting those goals. You give them a dashboard they can check on, how close are we to the things we're working on? You can fake video, you can fake audio, you can fake a whole press kit explaining how great the new road that doesn't actually exist that you just cut the ribbon for is. But you cannot argue with "we said you'd have free health care, and now you do" or "we said weed would be legal, and now it is" or "we said you'd get your college debt forgiven, and now it is". The ability to present material, verifiable proof that you improved a voter's life will overpower any potential wariness of being scammed by fake announcements or fake dramatic reveals. Take the personalities and media training and PR talk and slickness away from the whole thing, and make it about results. If anything can be faked except the experience of a voter navigating society, then you better figure out how to change that single voter's perception in a way that cannot be faked, so they can never feel cheated believing you were going to fix something you never did.

This won't work because the temptations are too great. You won't ever be able to find a group of marketing professionals and content people and political actors who are all so committed to the urgency of creating a reliable source for verified-real content that this ever works. The temptation to use just a few seconds of generated stock Attractive Family You Identify With Based On Your Marketing Profile is just too great, the initial KPIs and reports your consultants generate for you too good.

The problem isn't that some people will use this technology for bad ends. It's that nobody with the cash to make the decisions will be able to be relied on to resist whatever short-term incentives there are to use it. The principles just don't matter to people who matter enough to make the decisions.

selec fucked around with this message at 18:43 on Feb 16, 2024

Seph
Jul 12, 2004

Please look at this photo every time you support or defend war crimes. Thank you.
This will basically destroy any remaining credibility that videos have on social media. There will be zero cost to create inflammatory, viral propaganda videos to serve whatever purpose you desire.

“A Biden supporter stealing ballots out of a ballot box in a conservative district”

“A black man assaulting a white woman in broad daylight”

“A CNN broadcast showing a person wearing a MAGA hat shooting up a school”

You will have to decide which people and institutions you trust implicitly, since nothing will be independently verifiable outside of what you have seen with your own eyes. Anything outside of your trust network will automatically be discarded. Our news and media landscape will become even more fragmented and disconnected as these trust networks develop.

This is already happening somewhat since there have been plenty of manipulated or heavily edited videos out there. But the rate was slow enough that it was possible for people to independently verify videos. You could see someone on Twitter debunking a fake Ukrainian war crime, for example. Now with genAI there will be a flood of fake videos making independent verification impossible. The default will be that everything is fake unless someone you trust says it is true.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.
I think it will be quite some time before AI-generated videos are convincing. For the time being, I don't think it will be broadly more useful than the animation equivalent of tweening. That isn't nothing, of course, but it doesn't mean that all video is untrustworthy and everything is lost.

smoobles
Sep 4, 2014

cat botherer posted:

I think it will be quite some time before AI generated videos are convincing.

This is true of everyone posting in this thread, but countless boomers on Facebook are already engaging with AI pictures thinking they're real. Not all generations have equal media literacy.

D O R K Y
Sep 1, 2001

A scary bit in their technical report: it's able to extend videos forward or backward in time, like it's outpainting temporally. The examples are all stated to be extended from generated content, but I can't imagine a technical reason why real video couldn't be extended, if not with this then with something like it.

I can imagine the hit rate of fooling someone into believing AI-generated video is real jumps up significantly if you prepend it with real video.

smoobles
Sep 4, 2014

D O R K Y posted:

A scary bit in their technical report: it's able to extend videos forward or backward in time, like it's outpainting temporally. The examples are all stated to be extended from generated content, but I can't imagine a technical reason why real video couldn't be extended, if not with this then with something like it.

I can imagine the hit rate of fooling someone into believing AI-generated video is real jumps up significantly if you prepend it with real video.

Doing a "Greedo shot first" edit but to actual CCTV footage of a crime

Owling Howl
Jul 17, 2019

smoobles posted:

This is true of everyone posting in this thread, but countless boomers on Facebook are already engaging with AI pictures thinking they're real. Not all generations have equal media literacy.

Also true of insane rants on random blogs or social media. AI generated content, whether it's text, image or video, might exacerbate issues we have been dealing with on the internet for 20-30 years but it is not a new issue.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Replacing customer support with AI chatbots going about as expected.

https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot

quote:

After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline's bereavement travel policy.

...

According to Air Canada, Moffatt never should have trusted the chatbot and the airline should not be liable for the chatbot's misleading information because Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions," a court order said.

SCheeseman
Apr 23, 2003

Seph posted:

This will basically destroy any remaining credibility that videos have on social media. There will be zero cost to create inflammatory, viral propaganda videos to serve whatever purpose you desire.

“A Biden supporter stealing ballots out of a ballot box in a conservative district”

“A black man assaulting a white woman in broad daylight”

“A CNN broadcast showing a person wearing a MAGA hat shooting up a school”

You will have to decide which people and institutions you trust implicitly, since nothing will be independently verifiable outside of what you have seen with your own eyes. Anything outside of your trust network will automatically be discarded. Our news and media landscape will become even more fragmented and disconnected as these trust networks develop.

This is already happening somewhat since there have been plenty of manipulated or heavily edited videos out there. But the rate was slow enough that it was possible for people to independently verify videos. You could see someone on Twitter debunking a fake Ukrainian war crime, for example. Now with genAI there will be a flood of fake videos making independent verification impossible. The default will be that everything is fake unless someone you trust says it is true.

That we'll figure out how to make tools that can visually create anything with relative ease was an inevitability once cavemen started dragging charcoal across cave walls, if it wasn't AI it was going to be real time 3D game engines. Though ultimately it will probably be a tool that combines both. There are people that still believe Obama was born in Kenya, humans have been ignoring evidence of truth that doesn't align with their hard-set preconceptions since the beginning of civilization. You say it's already happening somewhat, I think it has already happened entirely.

Will things get worse? It'll definitely cause people to trust video at face value even less than they do now. Maybe that's a good thing, because to be frank, no one ever really should have been doing that in the first place.

SCheeseman fucked around with this message at 09:46 on Feb 17, 2024

The Artificial Kid
Feb 22, 2002
Plibble

quote:

the airline should not be liable for the chatbot's misleading information because Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions,"
More so than the human employees whose actions the company can sometimes be held liable for?

sinky
Feb 22, 2011



Slippery Tilde
The AI chatbots have a better union.

MixMasterMalaria
Jul 26, 2007

sinky posted:

The AI chatbots have a better union.

Right now the AI chatbots represent capital, not labor, so of course they have special privilege.

Tei
Feb 19, 2011

The Artificial Kid posted:

quote:

the airline should not be liable for the chatbot's misleading information because Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions,"

I hope a judge dismisses this idea with extreme prejudice, because... oh boy, what a scapegoat for lying through their teeth and getting away with it.

Bremen
Jul 20, 2006

Our God..... is an awesome God

Tei posted:

I hope a judge dismisses this idea with extreme prejudice, because... oh boy, what a scapegoat for lying through their teeth and getting away with it.

"We don't lie to our customers, we merely created a machine to lie to our customers. Checkmate."

Lemming
Apr 21, 2008

smoobles posted:

This is true of everyone posting in this thread, but countless boomers on Facebook are already engaging with AI pictures thinking they're real. Not all generations have equal media literacy.

If you post a picture of someone with a caption of a quote, people will believe it's something they said. This problem goes way beyond AI.

Al!
Apr 2, 2010

:coolspot::coolspot::coolspot::coolspot::coolspot:
Could you please just rename this loving thread.

Abhorrence
Feb 5, 2010

A love that crushes like a mace.

Al! posted:

Could you please just rename this loving thread.

Most people would be flattered by all the attention, you know.

GABA ghoul
Oct 29, 2011

Seph posted:

This will basically destroy any remaining credibility that videos have on social media. There will be zero cost to create inflammatory, viral propaganda videos to serve whatever purpose you desire.

“A Biden supporter stealing ballots out of a ballot box in a conservative district”

“A black man assaulting a white woman in broad daylight”

“A CNN broadcast showing a person wearing a MAGA hat shooting up a school”

You will have to decide which people and institutions you trust implicitly, since nothing will be independently verifiable outside of what you have seen with your own eyes. Anything outside of your trust network will automatically be discarded. Our news and media landscape will become even more fragmented and disconnected as these trust networks develop.

This is already happening somewhat since there have been plenty of manipulated or heavily edited videos out there. But the rate was slow enough that it was possible for people to independently verify videos. You could see someone on Twitter debunking a fake Ukrainian war crime, for example. Now with genAI there will be a flood of fake videos making independent verification impossible. The default will be that everything is fake unless someone you trust says it is true.

Sounds like an improvement tbh. A social media where the first post under every video is always "source your video or it didn't happen" may be better than what we have now.

I'm coming here after reading a Twitter post where some random guy posted a video he claims shows Navalny and an MI6 agent plotting a coup against Putin. None of the people in the video look even remotely like Alexei Navalny nor do they discuss anything suspicious, illegal or unusual, yet the replies are 90% ultra gullible people going on about how he deserved to be in jail for this and what a scandal it is that the lamestream media is not reporting it. It would be a huge improvement if more of these people were conditioned to outright dismiss the video, unless someone who can actually Google what Navalny looks like or form a coherent argument why the discussed content is suspicious (aka a journalist, even if it's a right wing one) presents it. It's how the world mostly used to work before the internet. You couldn't just watch some random videos with unknown origins and context.


Shrecknet
Jan 2, 2005




I wonder if this "AI Prompt Writer" might be interested in this idea I'm developing, a sort of right... of copy?

Regardless, this completely hilarious self-own is so rich I could drizzle it on a pancake.
