|
Watermelon Daiquiri posted:is an ai chatbot a person now If a person creates an AI and the AI generates art, the copyright of the art belongs to the creator of the AI. So I guess if you create an AI to give legal advice, all the legal ramifications of that advice also fall on the creator of the AI.
|
# ? Jan 27, 2023 10:05 |
|
|
|
Watermelon Daiquiri posted:is an ai chatbot a person now
|
# ? Jan 27, 2023 11:40 |
|
"AI" in Open AI, ChatGPT, various image generators, etc. is a marketing term to disguise what is actually going on. "The Cloud!!!" is someone else's computer. "AI!!!" is someone else's work. The algorithm that refines returns of scraped data (that's still often incorrect or incoherent) is the result of countless man hours of work, and that's before you get to the people who actually made the data being scraped. I mean everyone ITT knows this, but it's bothering me more and more.
|
# ? Jan 27, 2023 11:50 |
|
What! You're telling me this generated AI image isn't actually totally original!
|
# ? Jan 27, 2023 12:01 |
|
Mega Comrade posted:If a person creates an AI and the AI generates art, the copyright of the art belongs to the creator of the AI. It seems pretty likely that AI art will in fact not be copyrightable in the US
|
# ? Jan 27, 2023 12:30 |
|
Yeah the Supreme Court has already said that if a work isn't the direct action of a human, and instead of some other thing, it's not eligible for copyright. There was a case a guy lost where he tried to copyright a monkey selfie.
|
# ? Jan 27, 2023 13:30 |
|
T Zero posted:In the subscription era, you can lose your access to letters This doesn't make any sense. Pretty sure fonts are a one-time purchase, unless something has changed in the printing and graphics industry. Which wouldn't surprise me, if font creators went to a subscription-based model like Adobe does with their software, where it checks online to see if you are registered. E: Or it probably works differently for website layouts, now that I think about it. BiggerBoat fucked around with this message at 13:48 on Jan 27, 2023 |
# ? Jan 27, 2023 13:45 |
|
Inferior Third Season posted:It is if you make it a corporation. The AI chatbot can help you fill in the paperwork. I would like to invest in your AI chatbot's business.
|
# ? Jan 27, 2023 14:19 |
|
Boris Galerkin posted:“Chose not to renew due to price increase” is not the same thing as “can’t afford.” In software, there may be a semi-reasonable explanation that a subscription and its associated price increases go toward ongoing development. But a static product like a font? With little to no room for improvement? I want to believe the thinking was "It's NPR! Of course they'll pay a 60% yearly increase!"
|
# ? Jan 27, 2023 14:22 |
|
A font is a copyrightable piece of software, but it never occurred to me that it could also be treated as a SaaS product.
|
# ? Jan 27, 2023 14:33 |
|
Kwyndig posted:Yeah the Supreme Court has already said if it's not the direct action of a human, and instead some other thing, it's not applicable to copyright. There was a case where a guy tried to copyright a monkey selfie that he lost. This never made it to the Supreme Court fwiw. It was the US Copyright Office who said lol no because a non-human created the selfie. Another lower court dismissed a separate case citing the non-human created aspect.
|
# ? Jan 27, 2023 14:37 |
|
Boris Galerkin posted:This never made it to the Supreme Court fwiw. It was the US Copyright Office who said lol no because a non-human created the selfie. Another lower court dismissed a separate case citing the non-human created aspect. It wasn't just that a non-human created the selfie, but that there was no human involvement or creative intent at all - the selfie was a complete accident and a total surprise to the human. If the human had trained the monkey to take pictures and then handed the camera to the monkey, the human would have been able to claim copyright on the resulting pictures.
|
# ? Jan 27, 2023 14:42 |
|
BiggerBoat posted:This doesn't make any sense. Pretty sure fonts are a one time purchase unless something has changed in the printing and graphics industry. Which wouldn't surprise me if font creators went to a subscription based model like Adobe does with their software and it checks online to see if you are registered. Lol yeah we had to switch from nice-but-not-free fonts to ugly rear end google fonts in order to cut costs on apps/websites. It’s quite a time to be alive.
|
# ? Jan 27, 2023 15:06 |
|
Watermelon Daiquiri posted:is an ai chatbot a person now
|
# ? Jan 27, 2023 15:09 |
Name Change posted:"AI" in Open AI, ChatGPT, various image generators, etc. is a marketing term to disguise what is actually going on. If someone asks you a question that you know the answer to, did you spontaneously get the answer or did you synthesize it from someone else's work that you studied at one point? DALL-E's answer to "Wow, I love your work! Who were your influences?" is "Everybody and everything in the training data set"
|
|
# ? Jan 27, 2023 15:19 |
|
Sure but the real answer is “the mathematical likelihood of the next word that should come after this one.” It’s not actually intelligently synthesizing ideas.
|
# ? Jan 27, 2023 15:26 |
|
Cheesus posted:But a static product like a font? With little to no room for improvement? It's really no different from licensing any other copyrighted material. If you want to use a certain song in a game or movie you have to license it, and these licenses are typically time limited. The same is often true of fonts used in your publication. And don't underestimate font maintenance. If you want a font that looks consistent with different scripts used around the world, supports ligatures, right-to-left writing, proper keming, hinting… you usually have to pay. Fonts are hilariously complex nowadays.
|
# ? Jan 27, 2023 15:53 |
|
Riven posted:Sure but the real answer is “the mathematical likelihood of the next word that should come after this one.” It’s not actually intelligently synthesizing ideas. Yeah. These AIs are excellent at appearing intelligent, but they actually have zero understanding of what they are spitting out and can't even understand the most basic of concepts.
|
# ? Jan 27, 2023 16:00 |
|
Riven posted:Sure but the real answer is “the mathematical likelihood of the next word that should come after this one.” It’s not actually intelligently synthesizing ideas. That's what has been eating me. We already have many systems built upon knowledge graphs, which I would argue is closer to how human brains work with expertise. Even if you don't know the answer, you know where to look to verify the knowledge. I don't know Jennifer Aniston's birthday, but I know that I don't know it, and I also know where to look up that specific fact.

These systems work much more like very convincing bullshitters, but without the ability to navigate the social situations where they would get called out. They're just coldly giving out probabilistic answers with zero determinism. Most of these companies are going to find this out the hard way when the rubber meets the road, just like we went through with the blockchain hype. I mentioned to my friend yesterday that we'll probably see GPT-based startups getting naming rights to sports stadiums in the next year or so, because we're getting back on this roller coaster, apparently.
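The knowledge-graph contrast above can be sketched in a few lines of Python. This is a toy illustration (the facts and source names are made up for the example), showing a lookup that can say "I don't know, but I know where to check" instead of guessing:

```python
# Toy knowledge graph: facts we hold, plus pointers to where unknown
# facts could be verified. Unlike a language model, a lookup here either
# returns a stored fact or admits ignorance and names a source.
KNOWN_FACTS = {
    ("water", "boiling_point_c"): 100,
}

VERIFICATION_SOURCES = {
    "birthday": "a biography database",
}

def answer(entity, attribute):
    fact = KNOWN_FACTS.get((entity, attribute))
    if fact is not None:
        return f"{entity}: {attribute} = {fact}"
    source = VERIFICATION_SOURCES.get(attribute)
    if source is not None:
        return f"I don't know, but I could check {source}."
    return "I don't know, and I don't know where to look."

print(answer("water", "boiling_point_c"))      # a known fact
print(answer("Jennifer Aniston", "birthday"))  # a known unknown, with a source
```

The point of the sketch is the middle branch: the system can represent the boundary of its own knowledge, which a pure next-word predictor cannot.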
|
# ? Jan 27, 2023 16:12 |
|
Heck Yes! Loam! posted:Currently, not legally Stuff like ChatGPT is closer to a glorified set of IF/THEN statements that regularly run into walls or contradict themselves, than any sort of actual AI, let alone the level of AI that Star Trek characters like Data represent.
|
# ? Jan 27, 2023 16:27 |
Main Paineframe posted:Refer back to what I said earlier: Here's her full article, by the way. https://www.techdirt.com/2023/01/24/the-worlds-first-robot-lawyer-isnt-a-lawyer-and-im-not-sure-its-even-a-robot/
|
|
# ? Jan 27, 2023 16:27 |
|
Evil Fluffy posted:Stuff like ChatGPT is closer to a glorified set of IF/THEN statements that regularly run into walls or contradict themselves, than any sort of actual AI, let alone the level of AI that Star Trek characters like Data represent. I was thinking about the holographic doctor. Photons be free!
|
# ? Jan 27, 2023 16:32 |
|
Riven posted:Sure but the real answer is “the mathematical likelihood of the next word that should come after this one.” It’s not actually intelligently synthesizing ideas. That's just incorrect. Markov chain generators have been on the internet for 20+ years and they are absolutely poo poo compared to chatgpt. There's way more to it than word to word probability. Also this is basically the Benz Model 1 of AIs. Nobody thought seat belts or speed limits would be needed when they saw that car. Chatgpt is a big jump and AI is just going to progress further and further from here.
|
# ? Jan 27, 2023 16:51 |
|
BabyFur Denny posted:That's just incorrect. Markov chain generators have been on the internet for 20+ years and they are absolutely poo poo compared to chatgpt. There's way more to it than word to word probability.
|
# ? Jan 27, 2023 16:58 |
|
BabyFur Denny posted:That's just incorrect. Markov chain generators have been on the internet for 20+ years and they are absolutely poo poo compared to chatgpt. There's way more to it than word to word probability. In 2015, Musk promised fully autonomous vehicles within 3 years. It's now 8 years later, yet they still try to drive onto train tracks and think the Wells Fargo logo is a stop sign.
|
# ? Jan 27, 2023 17:12 |
|
Mega Comrade posted:In 2015 Musk promised fully autonomous vehicles within 3 years. Its now 8 years later yet they still try and drive on train tracks and think the Wells Fargo logo is a stop sign. you talking about humans or computers? Because people drive into bridges with multiple signs saying you have to be this short to go under every single day.
|
# ? Jan 27, 2023 17:27 |
|
Electric Wrigglies posted:you talking about humans or computers? Because people driving into bridges with multiple signs saying you have to be this short to go under happens every single day. Are you seriously mounting a defense for beta testing really lovely machine learning on public highways here?
|
# ? Jan 27, 2023 17:37 |
|
Evil Fluffy posted:Stuff like ChatGPT is closer to a glorified set of IF/THEN statements that regularly run into walls or contradict themselves, than any sort of actual AI, let alone the level of AI that Star Trek characters like Data represent. ChatGPT feels a lot like the “Chinese room” thought experiment. It’s handing you back what kinda looks right without any real understanding. Of course the next question is if that’s “all” our brains are doing as well.
|
# ? Jan 27, 2023 17:40 |
BabyFur Denny posted:That's just incorrect. Markov chain generators have been on the internet for 20+ years and they are absolutely poo poo compared to chatgpt. There's way more to it than word to word probability. GPT is a family of autoregressive language models, and predicting the next word in a sequence of words is literally all that they do. “Autoregressive” means a feed-forward process where every step takes into account a preceding history of some length. In contrast, Markov chains classically exhibit something mathematicians call “memorylessness”, which in simple English means that the next prediction depends only on the current state, not on any earlier history. cinci zoo sniper fucked around with this message at 19:16 on Jan 27, 2023 |
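The memoryless-vs-autoregressive distinction can be shown with a toy Python sketch (purely illustrative hand-made tables, not how either kind of model is actually implemented): a memoryless predictor sees only the current word, while an autoregressive one conditions on the whole preceding history.

```python
# Toy contrast: a first-order Markov ("memoryless") predictor vs. an
# autoregressive one. Both lookup tables are hand-made illustrations.
MARKOV = {
    # next word depends only on the current word
    "the": "bank",
}

AUTOREGRESSIVE = {
    # next word depends on the entire preceding history
    ("the", "river", "flooded", "the"): "bank",
    ("deposit", "money", "in", "the"): "vault",
}

def markov_next(history):
    # All context before the last word is discarded.
    return MARKOV.get(history[-1], "<unk>")

def autoregressive_next(history):
    # The full history is the conditioning context.
    return AUTOREGRESSIVE.get(tuple(history), "<unk>")

history = ["deposit", "money", "in", "the"]
print(markov_next(history))          # the memoryless model ignores "money"
print(autoregressive_next(history))  # the earlier words change the prediction
```

Both predictors answer "what comes after 'the'?", but only the autoregressive one can give different answers depending on what came before.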
|
# ? Jan 27, 2023 17:44 |
|
Markov chain generators basically work by going through existing literature and recording a gigantic table: for every pair of words x and y, what is the probability that word x immediately follows word y? And then it just generates sentences by sampling from the table. This is both conceptually and computationally easy and the results are predictably crap.

Older GPT iterations - I think I'm thinking of GPT-2, here - worked by instead asking: for every word x and every ten-word phrase y, what is the probability that word x immediately follows phrase y? The set of ten-word phrases is ridiculously huge and there's no hope of ever storing it explicitly, so instead you use gradient descent on weights in a neural network set up in "deep learning" fashion. The crucial point (and the reason deep learning was a breakthrough) is that while it's still "just" gradient descent, it's gradient descent on a really rich and well-structured parameter set that works really well for this prediction task.

For example, if you train one on millions of sentences like "5967 + 3 = 5970", then the parameter set is rich enough for the gradient descent to be able to discover some heuristics for addition and have a hope of correctly completing sentences like "400 + 352 = " that it hasn't seen before. It doesn't "know" how addition works, it could certainly never explain it, and it won't have a 100% success rate, but it will have some heuristics formed by looking at examples. This is something our brains can do too - when you e.g. hear a piece of music and try to decide whether it's by Mozart or from Pokemon, assuming the obvious tells like instrumentation are removed, you'll probably have a decent guess at which one but you won't be able to fully articulate why unless you're a music theorist.

While this was a huge step forward, not being able to look more than ten words into the past was a massive handicap.
I don't know what they're doing, but modern GPT iterations seem to be capable of far more than this. DaVinci-001, the successor to GPT-3, was capable of answering one of my exam questions that required it to generate a couple of paragraphs of coherent text accurately converting a story problem into a linear programming problem. The story problem contained a ton of unfamiliar language that wouldn't have been in any CS textbooks in the training corpus, but despite this it would have scored a solid 13/15 and been in the top half of the class. I haven't yet tried DaVinci-003 (the newest version) on one of my exams but I suspect it would do pretty well.

It's still a very very long way from "true AI", and it's still just gradient descent on a neural network. But if you asked someone in AI a decade ago whether gradient descent on a neural network would be able to take you to something like ChatGPT, they'd laugh in your face. When GPT-2 came out, everyone was really surprised that it was able to handle things like arithmetic problems better than contemporary "purpose-trained" AIs. The fact that modern AI is so capable despite just being gradient descent is legitimately surprising, not just from an outside view but also to people in the field, and the question of how much further it can go is a really interesting one - not just from the viewpoint of building stuff but also for what it says about how our own intelligence works.
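The bigram table described above can be sketched in a few lines of Python (a minimal sketch on a toy corpus; real generators use far larger corpora and smoothing):

```python
import random
from collections import defaultdict

def build_bigram_table(corpus):
    """For each word, count which words immediately follow it."""
    table = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        table[current][following] += 1
    return table

def generate(table, start, length, seed=0):
    """Sample a sentence by repeatedly drawing a weighted next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: no recorded successor for this word
        choices, weights = zip(*followers.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
table = build_bigram_table(corpus)
print(generate(table, "the", 6))
```

Because the table only ever looks one word back, the output is locally plausible but globally incoherent, which is exactly the handicap the post describes.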
|
# ? Jan 27, 2023 17:45 |
|
hobbesmaster posted:ChatGPT feels a lot like the “Chinese room” thought experiment. It’s handing you back what kinda looks right without any real understanding. Yeah I always thought the Chinese Room was a copout. As if having the translation book in our actual heads is what makes us real for some reason.
|
# ? Jan 27, 2023 17:45 |
|
Rocko Bonaparte posted:Yeah I always thought the Chinese Room was a copout. As if having the translation book in our actual heads is what makes us real for some reason. ChatGPT keeps making me think of the beginning of Blindsight
|
# ? Jan 27, 2023 17:52 |
|
Electric Wrigglies posted:you talking about humans or computers? Because people driving into bridges with multiple signs saying you have to be this short to go under happens every single day.
|
# ? Jan 27, 2023 17:52 |
It's the same thing as that 'p-zombie' bs which just reeks of 'we humans are ~special~' idealism and solipsism.
|
|
# ? Jan 27, 2023 17:54 |
pumpinglemma posted:I don't know what they're doing, but modern GPT iterations seem to be capable of far more than this. GPT-3’s primary improvement came from it being more than 10 times larger than GPT-2. Model architecture has not changed significantly since “GPT-1”, in fact - OpenAI “just” keeps getting better at training and fine-tuning the same model, and keeps getting more money to spend on GPUs.
|
|
# ? Jan 27, 2023 17:56 |
|
Heck Yes! Loam! posted:I was thinking about the holographic doctor. captain america: i understand this reference
|
# ? Jan 27, 2023 17:57 |
|
Motronic posted:Are you seriously mounting a defense for beta testing really lovely machine learning on public highways here? eh, it is a bit of a trolley problem because development of automation in driving could well reduce harm to people from driving (not just crashes but energy efficiency, time saving, etc) over the medium to long term. Cruise control is still killing people every day to this day even though it started out as a lovely PID loop and now they are quite refined.

A lot of attacks on automation / AI / machine learning consist of comparing the worst result of the technological solution with the best human outcome. e.g.

Supv: Hey joe, how about using the expert system to run the plant in stable operation?
Joe: What? I can beat that stupid thing every day, it can't even handle fault x without intervention
Supv: ok, what about last shift where feeder two was underfed for seven hours.
Joe: We are busy and tired, it's loving bullshit that you expect us to watch this thing all day long, you loving sit there for hours monitoring everything and not miss something.
Supv: that's what the expert system is for, to tirelessly watch and optim...
Joe: You're a gently caress-knuckle. I can beat that stupid thing every day.
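The cruise-control aside can be made concrete with a minimal PID loop in Python (an illustrative sketch with made-up gains and a one-line toy vehicle model, not real control code):

```python
# Minimal PID controller: nudge a measured speed toward a setpoint.
# The gains and the "vehicle response" line are made-up toy values.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=100.0)  # target: 100 km/h
speed = 80.0
for _ in range(200):
    throttle = pid.update(speed, dt=0.1)
    speed += throttle * 0.1  # toy vehicle response to throttle
print(round(speed, 1))
```

Even this toy loop overshoots before it settles near the setpoint, which is the sense in which early cruise control was "a PID loop" that needed decades of refinement.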
|
# ? Jan 27, 2023 18:01 |
Like, ffs, my brain just put that solipsism in there organically and I had to stop and think about how that actually worked in the context. My brain could pull that information together because it had the training for it. GPT models are far from true sentience, but they do have some intelligence.
|
|
# ? Jan 27, 2023 18:17 |
|
If the law bot is allowed by the courts, how long until Louisiana or some other regressive poo poo hole decides that actually, the constitutional right to an attorney can be satisfied by a contract with DoNotPay.com, and shuts down all their public defender offices? Then poors will get the legal equivalent of:
|
# ? Jan 27, 2023 18:18 |
|
|
|
Electric Wrigglies posted:eh, it is a bit of a trolley problem because development of automation in driving could well reduce harm to people from driving (not just crashes but energy efficiency, time saving, etc) over the medium to long term. Cruise control is still killing people every day to this day even though it started out as a lovely PID loop and now they are quite refined. The autonomous car and the "AI lawyer" have a similar problem in that they can't handle edge cases in a world where everything is almost an edge case. "Ideal road conditions" are never a real thing, and as humans our brains have adapted on the road to filter the bombardment of information, which AI cars can't even seem to do. On top of that, take the number of close calls we don't hear about, or that only get reported to Tesla because of human intervention, and we have a VERY incomplete picture of what is actually going on with the automated AI. Yeah, sure, fine, the potential benefits are there IF they can get the technology to work outside the best-case scenario.

Essentially, what I am saying is you are a bit clouded by optimism here. You've stated the end goal and the process, but we also keep seeing the numerous roadblocks and technical challenges that they can't get around right now. It's not a humans-are-superior thing, it's a they-haven't-proven-the-tech thing. Also, we need fewer cars.

Somewhat on topic, I see Captchas are focusing on sunflowers right now and I desperately want to know why they gently caress up AIs so much.
|
# ? Jan 27, 2023 18:21 |