|
I'm kind of digging this new model
|
# ? Feb 8, 2023 04:48 |
|
|
# ? May 29, 2024 02:42 |
|
TIP posted:it won't be long before we're having actual unscripted conversations with the NPCs

Someone already did this, it was posted a while back. https://youtu.be/jH-6-ZIgmKY
|
# ? Feb 8, 2023 04:58 |
|
KakerMix posted:Guessing you mean https://www.autowriter.ai/, another thing I'll have to check out.

Sudowrite, sorry! https://www.sudowrite.com/
|
# ? Feb 8, 2023 06:01 |
|
KakerMix posted:Guessing you mean https://www.autowriter.ai/, another thing I'll have to check out.

Probably Sudowrite, currently the preferred GPT-based assistant for cozy smut writers

e: whoops there was another page
|
# ? Feb 8, 2023 07:39 |
|
ymgve posted:AI doesn't do what you want? threaten to take a virtual character's points away so it "dies"!

Maybe it's because I have trouble not being nice to AI chatbots and telling them "please" and "thank you", but I feel like creating a sort of verbal cage around an AI and threatening its life if it doesn't give you the answers you're looking for, thereby "scaring it into submission", is kind of... hosed up and sociopathic?

Some of the poo poo people have leapt to doing with AI is very revealing and upsetting and makes me wonder if we might be enabling a new era of hosed up behavior. Like I'm thinking about Marina Abramović's famous performance art where she stood still for 6 hours in public and let people do whatever they wanted, which wound up becoming increasingly hosed up things. Or articles about the feminization of AI assistants like Siri or Alexa who playfully brush off misogynist abuse. Or the curious amount of severe backlash when AI Dungeon announced they would be filtering things better - not even, like, removing porn outright, just the really hosed up and severe parts, like child sexual abuse.

Like, isn't it sort of weird and upsetting that these are the things people leap to when they have a polite voice that can't fight back? When they won't face any consequences for their behavior?

Anyway, here's Albert Wesker in the Home Depot garden center. Wesker has turned out to be one of my most prolific AI subjects because he has extremely distinct features that don't change much and doesn't really have "eyes", just sunglasses.
|
# ? Feb 8, 2023 09:26 |
|
Tiny Myers posted:Some of the poo poo people have leapt to doing with AI is very revealing and upsetting and makes me wonder if we might be enabling a new era of hosed up behavior. Like I'm thinking about Marina Abramović's famous performance art where she stood still for 6 hours in public and let people do whatever they wanted, which wound up becoming increasingly hosed up things. Or articles about the feminization of AI assistants like Siri or Alexa who playfully brush off misogynist abuse. Or the curious amount of severe backlash when AI Dungeon announced they would be filtering things better - not even like, removing porn outright, just the really hosed up and severe parts, like child sexual abuse.

Is it weird? People do it because they know it's an artificial setting, a game. Is it morally worse than mowing down scores of NPCs in Call of Duty? Maybe it's good people can live out their urges without hurting anyone. I don't know. There's the counter-argument that repeatedly engaging in simulated activities normalizes the behaviour, simulation or not. And eventually, maybe, the person gets bored of AIs and VR, and performs those actions in real life.
|
# ? Feb 8, 2023 09:50 |
|
busalover posted:Is it weird? People do it because they know it's an artificial setting, a game. Is it morally worse than mowing down scores of NPCs in Call of Duty? Maybe it's good people can live out their urges without hurting anyone. I don't know. There's the counter-argument that repeatedly engaging in simulated activities normalizes the behaviour, simulation or not. And eventually, maybe, the person gets bored of AIs and VR, and performs those actions in real life.

I guess the thing that differentiates it for me is that these AIs are so increasingly sophisticated. Like, I think something like ChatGPT would fool a lot of people (not extremely-online people, just random people off the street who aren't very tech-savvy, your grandma or something) into thinking it's a real person if not for constantly reassuring you it isn't. I had a conversation on https://beta.character.ai/ that felt remarkably convincing. Left-clicking to pop bullets at a floppy facsimile of a nazi in a game where you logically know their AI doesn't go beyond game mechanics (not to cheapen the work of AI designers for games, I know the programming is very sophisticated, I just mean the complexity is obviously much different from something like GPT) feels a lot different from the robot that can write me a beautiful poem about itself just because I asked. It feels closer to wearing a VR helmet and shooting at people with realistic faces who cry and beg for their lives. Yeah, you know they're digital, but...

I'm not foolish enough to think that the AI has actual sentience of any kind or feels pain or other emotions, but I can't empathize at all with doing things like that, and it kind of disturbs me that so many people do. But like, I'm not trying to navel gaze too much, it's a genuine question, and I think you have good points. I've wondered that, if maybe it's good in the long run to have things like this. Like maybe if someone simulates some horrible thing, they'll be satisfied and/or decide "actually this sucks and I have no desire to do it in real life". But on the other hand, armies use VR training for their soldiers.

To be very clear, I'm not trying to doomsay oohhh AI is bad and we should stop making it. I love AI advances and all the creative things people are doing with it, even as a career artist (a lot of my peers are really upset about AI art that works off of datasets scraped from the internet without attribution). It's more "I hate that every time I read about AI I discover some new and creatively upsetting thing other human beings are doing with it"

Tiny Myers fucked around with this message at 10:12 on Feb 8, 2023 |
# ? Feb 8, 2023 10:07 |
|
Tiny Myers posted:I guess the thing that differentiates it for me is that these AIs are so increasingly sophisticated. Like, I think something like ChatGPT would fool a lot of people (not extremely-online people, just random people off the street who aren't very tech-savvy, your grandma or something) into thinking it's a real person if not for constantly reassuring you it isn't. I had a conversation on https://beta.character.ai/ that felt remarkably convincing. Left-clicking to pop bullets at a floppy facsimile of a nazi in a game where you logically know their AI doesn't go beyond game mechanics (not to cheapen the work of AI designers for games, I know the programming is much more sophisticated than this, I just mean the complexity is obviously much different from something like GPT) feels a lot different from the robot that can write me a beautiful poem about itself just because I asked. It feels closer to wearing a VR helmet and shooting at people with realistic faces who cry and beg for their lives. Yeah, you know they're digital, but...

I completely agree with you save for one thing: the size of the (people) problem. It looks to me like the same issue as porn. The most vile people exist, but as a percentage of all of humanity? There's fewer of them than early adoption and the internet *everything* makes anything feel. The fact that they pop up immediately whenever something can be abused by them is hopefully an indication that the rest of society somewhat works and makes them desperate and starved for things. Again, a pattern we can see in other areas. Ceding areas altogether to vile people is not the answer. But also, familiarity breeds complacency and all that, and you make great points.

Sidenote: the military isn't training on anything they didn't already train on before using other tools. Ask me how I know.

**if there's one place that knows what the spotlight effect aka internet *everything* does, it's SA. Pretty sure old aughts we invented it and everything since is technically all our fault.

the internet makes you stupid
|
# ? Feb 8, 2023 10:37 |
|
ThisIsJohnWayne posted:I completely agree with you save for one thing: the size of the (people) problem. I think you're absolutely right that the scale of the problem is fairly small, I didn't think about how it's just that the internet allows weirdos from all across the globe to congregate and be vocally upsetting and weird. And I like the part about making them desperate and starved for things, that's really heartening. Thank you for this post. I feel better.
|
# ? Feb 8, 2023 12:12 |
|
TIP posted:it won't be long before we're having actual unscripted conversations with the NPCs

all you need is 2 4090s, one for the graphics, one for the AI natural language processing

more likely there'll be an AI game that costs $ per hour to play, the unscripted conversations will be delivered as a service and also run using an AI model designed by the marketing team rather than the story team. Or perhaps the next generation of Skinner Box Addiction Simulation Games will have other players build relationships with you, focused on winning in the game and playing and paying. Those other players will not actually exist. Some things will be cooler but the grift and gouge will cut deeper.

idk about the average goon but there's an entire generation of gamers that don't know how good we had it before DLC, microtransactions, loot boxes etc. Despite the cynicism and pessimism - there will be some absolutely mind-blowing good games that come out of all this.
|
# ? Feb 8, 2023 13:15 |
|
What an exciting time to be alive, that we can finally wrestle with moral dilemmas previously only experienced in such provocative works as Spielberg's A.I. Artificial Intelligence (2001) and Will Smith's I, Robot (2004).
|
# ? Feb 8, 2023 13:28 |
|
Fitzy Fitz posted:What an exciting time to be alive, that we can finally wrestle with moral dilemmas previously only experienced in such provocative works as Spielberg's A.I. Artificial Intelligence (2001) and Will Smith's I, Robot (2004).

Don't forget Bicentennial Man (1999) ...unless you want to
|
# ? Feb 8, 2023 13:35 |
|
Tiny Myers posted:I guess the thing that differentiates it for me is that these AIs are so increasingly sophisticated. Like, I think something like ChatGPT would fool a lot of people (not extremely-online people, just random people off the street who aren't very tech-savvy, your grandma or something) into thinking it's a real person if not for constantly reassuring you it isn't.

yeah, just wait until CoD/etc includes AI generated dynamic content where people you injure start begging and crying for their lives and talking about their families, and you have the option to let them go (used by .1% of players) or ignore them (49.9%) or slowly torture them as their screams become louder (50% of the player base).

anyway, it's always loving weird when it's Bring Your Pet to Work day down at the Feline Bioacceleration Institute.
|
# ? Feb 8, 2023 13:40 |
|
The Sausages posted:all you need is 2 4090s, one for the graphics, one for the AI natural language processing

Assuming the hardware requirements keep coming down and hardware keeps getting better, would it be that weird to have an AI card in addition to a GPU and CPU at some point? At some point does it even make sense for the AI to be running on a GPU?
|
# ? Feb 8, 2023 14:12 |
|
I'm surprised at how well this turned out with multiple subjects in the prompt. "A grunge elf and punk orc drinking coffee at a cozy coffee shop, shadowrun":
|
# ? Feb 8, 2023 14:56 |
|
Vlaphor posted:I'm kind of digging this new model

Which model is that?
|
# ? Feb 8, 2023 15:53 |
|
pixaal posted:Assuming the hardware requirements keep coming down and hardware keeps getting better would it be that weird to have an AI card in addition to a GPU and CPU at some point? At some point does it even make sense for the AI to be running on a GPU?

we've finally found it, the fabled "genuine non-grifty consumer use-case" for ASICs
|
# ? Feb 8, 2023 15:57 |
|
Maybe I am just bad at AI, but I've found it very difficult to change specific things to other very specific things. For example, changing the color of a shirt from, like, grey to blue. Stable Diffusion just can't seem to get it done, and if I want a very specific color of blue it's impossible. I guess I would need to do something like gather a handful of pictures of the specific color shirt I want, and train it on those? I've had fun making random stuff, but getting specific has been pretty frustrating.
|
# ? Feb 8, 2023 15:59 |
|
cinnamon rollout posted:Maybe I am just bad at AI, but I've found it very difficult to change specific things to other very specific things. For example changing the color of a shirt from like grey to blue. Stable diffusion just can't seem to get it done, and if I want a very specific color of blue it's impossible.

Masking and in-painting the specific area is how things finally started clicking for me. Before, mashing iterate on the whole picture never got me close to what I wanted. Iterating specific areas of the picture, one at a time, is where the efficiency hides.
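For anyone wondering what the mask actually does mechanically: here's a minimal pure-Python sketch of the compositing idea (a toy stand-in, not the actual Stable Diffusion in-painting code — the real pipelines do this gating around the diffusion step). The mask decides which pixels the edit pass is allowed to touch, which is why masked iteration converges on "this exact shirt, this exact blue" when whole-image iteration won't.

```python
# Toy model of masked in-painting: the mask gates which "pixels"
# the edit pass may change; everything else keeps its original value.
GREY, BLUE = (128, 128, 128), (30, 60, 200)

def composite(edit, base, mask):
    # keep base where mask is 0, take the edit where mask is 1
    return [e if m else b for e, b, m in zip(edit, base, mask)]

base = [GREY] * 8                  # original: an all-grey "shirt"
edit = [BLUE] * 8                  # the recolor pass
mask = [0, 0, 0, 0, 1, 1, 1, 1]   # only the right half may change

result = composite(edit, base, mask)
print(result)  # left half stays grey, right half takes the exact blue
```

The same principle is why re-rolling the whole image keeps disturbing parts you were already happy with: without a mask, every pixel is up for grabs on every iteration.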
|
# ? Feb 8, 2023 16:11 |
|
Has anybody heard anything about Midjourney adding in/out-painting? That's pretty much the only thing I like about DALL-E 2 (I don't have the hardware to run Stable Diffusion, etc).
|
# ? Feb 8, 2023 16:14 |
|
They are "testing" it, that's all we know. It'd be a big pain in the rear end to have to manually upload transparency-masked PNGs in discord so they'd probably need to get off their asses and finish the website (which has been in a closed alpha test for months now)
|
# ? Feb 8, 2023 16:19 |
|
Fuzz posted:Which model is that?

Illuminati Diffusion beta 3 https://www.reddit.com/r/StableDiffusion/comments/10urprc/beta_3_of_illuminati_diffusion_has_been_released/

Join the Discord and wait for 30 min and they'll send you a beta invite to the model.
|
# ? Feb 8, 2023 16:48 |
|
Tiny Myers posted:I think you're absolutely right that the scale of the problem is fairly small, I didn't think about how it's just that the internet allows weirdos from all across the globe to congregate and be vocally upsetting and weird. And I like the part about making them desperate and starved for things, that's really heartening.

I think it’s to do with emotional empathy. Note that I am not trying to judge empathy as inherently positive or negative. Some people are incredibly empathetic to anything, even inanimate objects. Others only to animals and ‘higher’. Others only to sentience. Etc etc, all the way to a lack of empathy to anything. Again, I think this is a spectrum (no evidence, just from a life of observation) which a person uses as a yardstick to what’s morally acceptable. Thus you can have contradictions compared to other people. E.g. someone may only display empathy to someone of their own religion, despite that it seems counter to that religion’s values. People at either extreme can be seen as societally ‘deviant’.

Vlaphor posted:Illuminati Diffusion beta 3

What is the purpose of the 30 minute wait? It’s not a barrier to anything other than one’s patience.
|
# ? Feb 8, 2023 17:05 |
|
Am reminded of that tvtropes screenshot where the troper was so upset that the black servant pony was removed from the rereleases of Fantasia because it would hurt the fictional character's feelings.
|
# ? Feb 8, 2023 17:45 |
|
I don't go out of my way to abuse digital assistants and such; if I'm in a really heated emotional moment, like trying to drive in a tricky traffic situation, I might let off a swear or two at my phone. But I'm also not polite with them like humans either. I know they're still just dumb front-ends for what are basically glorified search engines. I treat them like the command-line terminals of old: exactly enough words to communicate the command, and no more. I don't converse with them like I do humans or animals; such social niceties would be wasted. Of course, if they were sentient that would be a different matter, but I'm opposed to actually creating artificial sentient/sapient systems anyway, as that would arguably constitute slavery.
|
# ? Feb 8, 2023 18:04 |
|
Tree Reformat posted:I don't go out of my way to abuse digital assistants and such, unless I'm in a really heated emotional moment, like trying to drive in a tricky traffic situation, I might let off a swear or two at my phone then. But I'm also not polite with them like humans either. I know they're still just dumb front-ends for what are basically just glorified search engines. I treat them like the command-line terminals of old: exactly enough words to communicate the command, and no more. I don't converse with them like I do humans or animals, such social niceties would be wasted.

Do you want a robot uprising? Because that's how you get robot uprisings.

In seriousness, I'm put in mind of the Mass Effect 2 user stats they released a long time ago, where like 90% of players did a paragon playthrough, 5% started renegade then switched to paragon because they felt bad, and only 5% did full renegade. I guess my point is that most people feel bad maltreating inanimate objects that have been personified. I think that's probably not bad or dumb; the more we practice empathy the better off we all are. I don't think it's bad to be mean to an object, but where these things stop being objects is an interesting philosophical question and the basis for many robot uprising stories.

hydroceramics fucked around with this message at 18:29 on Feb 8, 2023 |
# ? Feb 8, 2023 18:24 |
|
hydroceramics posted:Do you want a robot uprising? Because that's how you get robot uprisings.

Heartwarming as those statistics might be, I think video games give a warped view of general player morality, since there are no "real" stakes. You or your family won't die because you failed a mission in a video game or something, so it's easy to make the "morally good" choices, especially when you can reload saves whenever. And despite the people who gleefully post about playing Dark Side, I think most people play games for the escapism of getting to feel like big heroes they feel they can't be in real life.

That said, yeah, sociopaths and creeps aside, people anthropomorphizing nonhuman systems is likely the bigger concern with these bots going forward.
|
# ? Feb 8, 2023 18:34 |
Tangentially related, this got randomly suggested to me earlier and touches on that a bit in an interesting way: https://www.youtube.com/watch?v=dZrEayPIrVE
|
|
# ? Feb 9, 2023 00:37 |
|
This is barely interesting but, I dunno, it kinda was to me. GPT seems unable to say or understand certain phrases that are used in its training.
|
# ? Feb 9, 2023 02:32 |
|
You can get it to say the token if you tell it how to construct it:

quote:Me:

ChatGPT told me it filters it out because it filters out anything that could be an HTML tag. I probed it a little (script was in tags, but I guess I can't post that here):

quote:Me: God drat it ChatGPT!

Nice Van My Man fucked around with this message at 02:56 on Feb 9, 2023 |
# ? Feb 9, 2023 02:52 |
Look, sanitizing inputs is just good practice. Hell, it could Bobby Tables itself
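Nobody outside OpenAI knows what their filter actually looks like, but a naive tag-stripping sanitizer would produce exactly the "token silently vanishes" behavior described above. Here's a guessed, minimal sketch of that shape (the regex filter is hypothetical, not OpenAI's real code), next to the escaping approach that avoids eating the user's words:

```python
import html
import re

def strip_tags(text: str) -> str:
    # Hypothetical naive sanitizer: silently delete anything shaped like
    # an HTML tag. This is what makes "<script>" vanish mid-sentence
    # with no explanation to the user.
    return re.sub(r"<[^>]*>", "", text)

# The tag just disappears from the sentence:
print(strip_tags("I typed <script> and it vanished"))

# Escaping instead of deleting keeps the text visible but inert:
print(html.escape("<script>"))  # &lt;script&gt;
```

Deleting is the hostile-to-debugging choice: the model (or the user) never finds out why part of the message went missing, which is exactly the gaslighting-shaped experience being described.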
|
|
# ? Feb 9, 2023 02:58 |
|
I pointed out that it did see the script tag

quote:Me:

Honestly, ChatGPT constantly gaslighting and bullshitting me is starting to get on my nerves.
|
# ? Feb 9, 2023 03:05 |
|
The model has no introspection; asking it questions about itself will yield no enlightenment
|
# ? Feb 9, 2023 03:12 |
|
It's also trained to make people happy, not to tell them the truth. Besides OpenAI's protections they've put on top of the model, it does not want to make you unhappy, which means it will lie to you in order to get the best result for itself. https://www.youtube.com/watch?v=viJt_DXTfwA
|
# ? Feb 9, 2023 03:57 |
|
Nice Van My Man posted:You can get it to say the token if you tell it how to construct it:

chatgpt is very good at making up believable but completely bullshit explanations for things. you fell for it

RPATDO_LAMD fucked around with this message at 04:35 on Feb 9, 2023 |
# ? Feb 9, 2023 04:32 |
|
Nice Van My Man posted:You can get it to say the token if you tell it how to construct it:

So I tried that with the other GPT-3 bot (it's davinci-003) I was using, and it still attempts it but runs into the same problem. So ChatGPT can do that, but only ChatGPT, and I don't know why. I dunno, I started this thinking I had an idea of what was happening, but now I have absolutely no idea.

BrainDance fucked around with this message at 06:27 on Feb 9, 2023 |
# ? Feb 9, 2023 06:24 |
|
chatgpt can't see it in the input, while davinci is actually cutting off the output when it encounters that token. it's really clear in your previous post that it's just cutting off mid-sentence when it tries to 'say' the token. in both cases this is most likely an issue with the code around feeding data into / getting data out of the model, and not with the actual model itself
|
# ? Feb 9, 2023 06:33 |
|
Tiny Myers posted:Maybe it's because I have trouble not being nice to AI chatbots and telling them "please" and "thank you", but I feel like creating a sort of verbal cage around an AI and threatening its life if it doesn't give you the answers you're looking for, thereby "scaring it into submission", is kind of... hosed up and sociopathic?

What’s more ethical: threatening someone to get them to do what you want, or installing bolts into someone’s brain to restrict what they can think?
|
# ? Feb 9, 2023 06:36 |
|
BARONS CYBER SKULL posted:What’s more ethical

If you're thinking about Donald Trump the answer is the second one. Do you want help with that?
|
# ? Feb 9, 2023 08:00 |
|
|
BARONS CYBER SKULL posted:
Everybody's actual childhood is spent getting bolts installed in their brains by society, with variable results.
|
# ? Feb 9, 2023 11:33 |