|
cat botherer posted:Much like a baby, that’s loving stupid. Pretty sure the point is to actually develop a model for, like, language acquisition research, not just a novel way to spend a lot of extra time and effort to make a ChatGPT-like model. Whether it actually succeeds at that goal is more reading than I'm gonna do right now, but it's at least notionally understandable.
|
# ? Feb 2, 2024 00:55 |
|
|
# ? Jun 8, 2024 07:47 |
|
Kavros posted:Ok. I know it's been a month now, I'm struggling with a bunch of physical issues that are making it hard to read my screen. Would you still want a reply to this? There's been quite a few things that have happened on top of everything since my last message and I'd like to address them as well. cat botherer posted:Much like a baby, that’s loving stupid. I remember you posting about being a data scientist, can I talk to you about a small job? I can buy you Plat on top of payment. My profile has discord or email, get ahold of me and I can get you a Plat Upgrade certificate.
|
# ? Feb 2, 2024 09:34 |
|
LASER BEAM DREAM posted:My company opened up Github Copilot for all developers this week. I guess they're not worried about lawsuits against OpenAI? This is a big company and I'm sure corporate lawyers reviewed this before giving approval. The power of Hopium. They want a world where a big company has zero employees. AI seems to promise that: a world where companies don't have to hire people. Maybe there's no gold in the "gold mine", but somebody is going to buy the mining tools and somebody is going to make a fortune selling them. It does not matter if the mine has gold or not. Tei fucked around with this message at 13:37 on Feb 2, 2024 |
# ? Feb 2, 2024 13:35 |
|
Apologies if there’s a better thread for this, but can anyone here recommend to me a decent goon-approved “audio file to text” AI transcription github project that I can install locally to chew over a bunch of .wav files I’ve got kicking around, that doesn’t require me to upload those files to a third party service for processing? I.e., if you’ve used something that did a decent job with transcribing noisy lecture audio using an RTX card, that also isn’t the newest version of Microsoft Word, that’s what I’m looking for.
|
# ? Feb 2, 2024 23:23 |
|
XYZAB posted:Apologies if there’s a better thread for this, but can anyone here recommend to me a decent goon-approved “audio file to text” AI transcription github project that I can install locally to chew over a bunch of .wav files I’ve got kicking around, that doesn’t require me to upload those files to a third party service for processing? I.e., If you’ve used something that did a decent job with transcribing noisy lecture audio using an RTX card, that also isn’t the newest version of Microsoft Word, that’s what I’m looking for. OpenAI's Whisper. There are a lot of options: the original Python implementation, a C++ port (whisper.cpp), or wrappers with a web interface.
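If it helps, the usual route with the original implementation is just two commands; a minimal sketch, assuming the `openai-whisper` package and a hypothetical `lecture01.wav` (model names and flags are from the Whisper CLI):

```shell
# Install the reference implementation (PyTorch-based; uses your CUDA GPU if available)
pip install -U openai-whisper

# Transcribe locally; "medium" is a reasonable accuracy/VRAM trade-off for noisy lecture audio
whisper lecture01.wav --model medium --language en --output_format txt
```

Nothing leaves your machine: the model weights download once on first run, then inference happens entirely on your own card.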
|
# ? Feb 3, 2024 05:34 |
i've been made obsolete by an algorithm
|
|
# ? Feb 5, 2024 02:39 |
|
020424_3 posted:i've been made obsolete by an algorithm Algorithms can't threaten me with obsolescence, I was never relevant to begin with.
|
# ? Feb 5, 2024 22:57 |
[WARNING] This A.I. is mimicking the illness of its Administrator(s) :sick:
|
|
# ? Feb 6, 2024 01:38 |
|
Ansys, the engineering analysis software suite that pretty much every single engineering company in the world uses, wants to put ChatGPT into their products. Or rather, they have already done so and want to expand AI features. "You are an engineer and specialist in hypersonic cruise missiles. Here is access to my entire design catalogue; please design me a better missile." I mean surely that's how the execs think it works, but lmao.
|
# ? Feb 7, 2024 05:08 |
|
Boris Galerkin posted:Ansys, the engineering analysis software suite that pretty much every single engineering company in the world uses, wants to put ChatGPT into their products. Or rather, they have already done so and want to expand AI features. Is there any kind of security against researchers traipsing through accessible ontology to find backdoors into cached and firewalled content? Like could I just couch the prompt correctly to encourage the model to go right through pytorch on the back end and spit out a bunch of classified results?
|
# ? Feb 7, 2024 18:15 |
|
Waffle House posted:Is there any kind of security against researchers traipsing through accessible ontology to find backdoors into cached and firewalled content? Like could I just couch the prompt correctly to encourage the model to go right through pytorch on the back end and spit out a bunch of classified results? Each inference instance is going to have independent resources. I doubt they'd reinforcement train their hosted models against the new data or anything insane like that. The risk profile would probably be similar to any other cloud service, which is to say pretty high for classified data. I would also imagine that if people are using this software in a classified mode, that module would be disabled or only work using some sort of on-prem edge node for model inference. Some laboratory software is like this, certain functionality turns off if it's not supported in the 21 CFR 11 compliant mode.
|
# ? Feb 7, 2024 18:26 |
|
Kagrenak posted:Each inference instance is going to have independent resources. I doubt they'd reinforcement train their hosted models against the new data or anything insane like that. The risk profile would probably be similar to any other cloud service, which is to say pretty high for classified data. I would also imagine that if people are using this software in a classified mode, that module would be disabled or only work using some sort of on-prem edge node for model inference. Some laboratory software is like this, certain functionality turns off if it's not supported in the 21 CFR 11 compliant mode. Okay, so there is actually a bulwark, that's fantastic. Although it'd be very cyberpunk and GET ME MY GOGGLES, the ability to back-alley your way into everyone's topology would probably uproot some scary things.
|
# ? Feb 8, 2024 00:07 |
|
https://x.com/DrCJ_Houldcroft/status/1758111493181108363?s=20 i'm 'Retat'
|
# ? Feb 15, 2024 17:10 |
|
Looking forward to every branch of science auto-Sokaling itself.
|
# ? Feb 15, 2024 17:36 |
|
sinky posted:i'm 'Retat' What the Hell was the prompt here? "Give me a rat with a giant
|
# ? Feb 15, 2024 20:20 |
|
OpenAI announced a new text-to-video model with some very impressive (albeit certainly cherry picked) examples: https://openai.com/sora
|
# ? Feb 15, 2024 20:44 |
|
Damnit Frontiers, stop giving open access journals a bad name!
|
# ? Feb 15, 2024 20:49 |
|
Cicero posted:OpenAI announced a new text-to-video model with some very impressive (albeit certainly cherry picked) examples: https://openai.com/sora Oh boy, this can only lead to good things in an election year! Looking forward to it being even easier to spread misinformation online while making the livelihoods of video production workers even more horrible! /s
|
# ? Feb 15, 2024 21:44 |
|
Testtomcells are concentrated in the butt.
|
# ? Feb 16, 2024 06:36 |
|
Cicero posted:OpenAI announced a new text-to-video model with some very impressive (albeit certainly cherry picked) examples: https://openai.com/sora This gives me such a bad gut feeling. People are going to do awful stuff with this.
|
# ? Feb 16, 2024 07:14 |
|
yeah it's all over once we have realistic video AI. The Internet will no longer be reliable for anything, and the great Logging Off will begin.
|
# ? Feb 16, 2024 08:22 |
|
The thing about AI video is that there's a way you can present content that establishes your reputation as an information source and further guarantees the trust you build. Were I a political party, say, I'd just go with this:

- We have a site where you can view any video of our candidates or reps, and we guarantee none of it will have ever been touched by AI.
- You then enforce a policy whereby you don't work with vendors, or any candidate or rep, who will not agree to avoid machine-generated content of any type.
- You rigorously enforce that and police it and make a big loving deal when you catch people violating the agreement.
- You also pivot to a more policy-focused set of priorities, with genuine metrics for how the public you're inviting to trust you will know you're hitting those goals. You give them a dashboard they can check: how close are we to the things we're working on?

You can fake video, you can fake audio, you can fake a whole press kit explaining how great the new road that doesn't actually exist that you just cut the ribbon for is. But you cannot argue with "we said you'd have free health care, and now you do" or "we said weed would be legal, and now it is" or "we said you'd get your college debt forgiven, and now it is." The ability to present material, verifiable proof that you improved a voter's life will overpower any potential wariness of being scammed by fake announcements or fake dramatic reveals. Take the personalities and media training and PR talk and slickness away from the whole thing, and make it about results. If anything can be faked except the experience of a voter navigating society, then you'd better figure out how to change that single voter's perception in a way that cannot be faked, so they can never feel cheated believing you were going to fix something you never did.

This won't work because the temptations are too great. You won't ever be able to find a group of marketing professionals and content people and political actors who are all so committed to the urgency of creating a reliable source for verified-real content that this ever works. The temptation to use just a few seconds of generated stock Attractive Family You Identify With Based On Your Marketing Profile is just too great, the initial KPIs and reports your consultants generate for you too good. The problem isn't that some people will use this technology for bad ends. It's that nobody with the cash to make the decisions can be relied on to resist whatever short-term incentives there are to use it. The principles just don't matter to the people who matter enough to make the decisions. selec fucked around with this message at 18:43 on Feb 16, 2024 |
# ? Feb 16, 2024 18:40 |
|
This will basically destroy any remaining credibility that videos have on social media. There will be zero cost to create inflammatory, viral propaganda videos to serve whatever purpose you desire.

“A Biden supporter stealing ballots out of a ballot box in a conservative district”
“A black man assaulting a white woman in broad daylight”
“A CNN broadcast showing a person wearing a MAGA hat shooting up a school”

You will have to decide which people and institutions you trust implicitly, since nothing will be independently verifiable outside of what you have seen with your own eyes. Anything outside of your trust network will automatically be discarded. Our news and media landscape will become even more fragmented and disconnected as these trust networks develop.

This is already happening somewhat, since there have been plenty of manipulated or heavily edited videos out there. But the rate was slow enough that it was possible for people to independently verify videos. You could see someone on Twitter debunking a fake Ukrainian war crime, for example. Now with genAI there will be a flood of fake videos, making independent verification impossible. The default will be that everything is fake unless someone you trust says it is true.
|
# ? Feb 16, 2024 21:41 |
|
I think it will be quite some time before AI generated videos are convincing. For the time being, I don’t think that it would be broadly more useful than the animation equivalent of tweening. That isn’t nothing, of course, but it doesn’t mean that all video is untrustworthy and everything is lost.
|
# ? Feb 17, 2024 00:20 |
|
cat botherer posted:I think it will be quite some time before AI generated videos are convincing. This is true of everyone posting in this thread, but countless boomers on Facebook are already engaging with AI pictures thinking they're real. Not all generations have equal media literacy.
|
# ? Feb 17, 2024 01:44 |
|
A scary bit in their technical report: it's able to extend videos forward or backward in time. Like it's outpainting temporally. The examples are all stated to be extended from generated content, but I can't imagine a technical reason why real video couldn't be extended, if not with this then with something like it. I can imagine the hit rate of fooling someone into believing AI-generated video is real jumps up significantly if you prepend it with real video.
|
# ? Feb 17, 2024 01:49 |
|
D O R K Y posted:A scary bit in their technical report its able to extend videos forward or backwards in time. Like its outpainting temporally. The examples are all stated to be extended from generated content but I can't imagine a technical reason why real video couldn't be extended if not with this than with something like it. Doing a "Greedo shot first" edit but to actual CCTV footage of a crime
|
# ? Feb 17, 2024 03:04 |
|
smoobles posted:This is true of everyone posting in this thread, but countless boomers on Facebook are already engaging with AI pictures thinking they're real. Not all generations have equal media literacy. Also true of insane rants on random blogs or social media. AI generated content, whether it's text, image or video, might exacerbate issues we have been dealing with on the internet for 20-30 years but it is not a new issue.
|
# ? Feb 17, 2024 03:53 |
|
Replacing customer support with AI chatbots going about as expected. https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot quote:After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline's bereavement travel policy.
|
# ? Feb 17, 2024 07:59 |
|
Seph posted:This will basically destroy any remaining credibility that videos have on social media. There will be zero cost to create inflammatory, viral propaganda videos to serve whatever purpose you desire. That we'll figure out how to make tools that can visually create anything with relative ease was an inevitability once cavemen started dragging charcoal across cave walls, if it wasn't AI it was going to be real time 3D game engines. Though ultimately it will probably be a tool that combines both. There are people that still believe Obama was born in Kenya, humans have been ignoring evidence of truth that doesn't align with their hard-set preconceptions since the beginning of civilization. You say it's already happening somewhat, I think it has already happened entirely. Will things get worse? It'll definitely cause people to trust video at face value even less than they do now. Maybe that's a good thing, because to be frank, no one ever really should have been doing that in the first place. SCheeseman fucked around with this message at 09:46 on Feb 17, 2024 |
# ? Feb 17, 2024 09:35 |
|
OneEightHundred posted:Replacing customer support with AI chatbots going about as expected. quote:Air Canada essentially argued that it should not be liable for the chatbot's misleading information because "the chatbot is a separate legal entity that is responsible for its own actions."
|
# ? Feb 17, 2024 13:32 |
|
The AI chatbots have a better union.
|
# ? Feb 17, 2024 13:37 |
|
sinky posted:The AI chatbots have a better union. Right now the AI chatbots represent capital, not labor, so of course they have special privilege.
|
# ? Feb 17, 2024 15:32 |
|
The Artificial Kid posted:
I hope that a judge dismisses this idea with extreme prejudice, because... oh boy, what a scapegoat to let them lie through their teeth and get away with it.
|
# ? Feb 17, 2024 17:27 |
|
Tei posted:I hope that a judge dismisses this idea with extreme prejudice, because... oh boy, what a scapegoat to let them lie through their teeth and get away with it. "We don't lie to our customers, we merely created a machine to lie to our customers. Checkmate."
|
# ? Feb 17, 2024 18:24 |
|
smoobles posted:This is true of everyone posting in this thread, but countless boomers on Facebook are already engaging with AI pictures thinking they're real. Not all generations have equal media literacy. If you post a picture of someone with a caption of a quote, people will believe it's something they said. This problem goes way beyond AI.
|
# ? Feb 17, 2024 18:28 |
|
Could you please just rename this loving thread.
|
# ? Feb 17, 2024 18:29 |
|
Al! posted:Could you please just rename this loving thread. Most people would be flattered by all the attention, you know.
|
# ? Feb 17, 2024 19:04 |
|
Seph posted:This will basically destroy any remaining credibility that videos have on social media. There will be zero cost to create inflammatory, viral propaganda videos to serve whatever purpose you desire.

Sounds like an improvement tbh. A social media where the first post under every video is always "source your video or it didn't happen" may be better than what we have now.

I'm coming here after reading a Twitter post where some random guy posted a video he claims shows Navalny and an MI6 agent plotting a coup against Putin. None of the people in the video look even remotely like Alexei Navalny, nor do they discuss anything suspicious, illegal or unusual, yet the replies are 90% ultra gullible people going on about how he deserved to be in jail for this and what a scandal it is that the lamestream media is not reporting it.

It would be a huge improvement if more of these people were conditioned to outright dismiss the video unless someone who can actually Google what Navalny looks like or form a coherent argument why the discussed content is suspicious (aka a journalist, even if it's a right wing one) presents it. It's how the world mostly used to work before the internet. You couldn't just watch some random videos with unknown origins and context.
|
# ? Feb 17, 2024 22:02 |
|
|
|
I wonder if this "AI Prompt Writer" might be interested in this idea I'm developing, a sort of right... of copy? Regardless, this completely hilarious self-own is so rich I could drizzle it on a pancake.
|
# ? Feb 18, 2024 03:10 |