Reveilled
Apr 19, 2007

Take up your rifles

Boris Galerkin posted:

Does anyone know/have information on whether or not “AI” can be used in linguistics, particularly with deciphering untranslated scripts like Linear A?

In my mind, languages have rules, right? And we have textual examples of languages that we haven’t translated, right? So train an AI on languages and let it pick up on the rules and tell us finally what they say?

I think the major problem with using an AI to decipher Linear A is that the entirety of all Linear A text ever uncovered would fit inside one very long twitter thread (it's something like 7400 characters). That's spread across lots of separate inscriptions, many of which we can only read like one or two words on each line, and many of which contain words that appear exactly once in the entire corpus. Some of the longest texts we have appear to just be lists, meaning they're essentially only nouns, names and numbers, with none of the rest of the language's connective tissue to determine what they're talking about. The numbers appear often enough that they've been deciphered, but if the word QA-QA-RU is on a list and literally nowhere else, there's virtually no way to tell if a QA-QA-RU is a goat, an amphora of oil, or a low-quality copper ingot.
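To put the "appears exactly once" problem in concrete terms, here's a rough sketch of how you'd count how much of a corpus is hapax legomena (words that occur only once). The inscriptions and sign-group tokens below are invented placeholders, not real Linear A data:

```python
from collections import Counter

# Toy "corpus": each inner list is one inscription, already split into
# sign-group tokens. These tokens are invented placeholders, not real
# Linear A transcriptions.
inscriptions = [
    ["KU-RO", "QA-QA-RU", "3"],
    ["KU-RO", "SA-RA2", "10"],
    ["A-DU", "KU-RO", "1"],
]

counts = Counter(token for line in inscriptions for token in line)
hapaxes = [token for token, n in counts.items() if n == 1]

print(f"{len(hapaxes)} of {len(counts)} distinct tokens appear exactly once")
# A token that only ever shows up once gives a statistical model nothing
# to correlate it against, which is the core problem described above.
```

With only ~7400 characters in total, a lot of the real corpus sits in that "appears once" bucket, which is why there's so little signal for any model to learn from.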


Reveilled
Apr 19, 2007

Take up your rifles
I dunno if I'm just a moron but I always figured if it was possible to render a being unconscious, then by definition they must be conscious before you knock them out.

Reveilled
Apr 19, 2007

Take up your rifles

Boris Galerkin posted:

I see. It makes sense for early writing to be simple, lists, who owes who, etc. I didn’t realize that that was kinda all we had on Linear A. I don’t know much about it, I just pulled it out of my rear end as a classic example of an undeciphered script that may or may not have been related to languages we know today.

What about for stuff like the Voynich manuscript? IIRC it’s an entire book or two of undeciphered writing that may or may not have been the bored doodles of some random guy.

I would absolutely put money on someone feeding the Voynich manuscript into an AI to attempt to decipher it, in fact I'd be honestly surprised if there's not someone out there trying to do it right now. Guaranteed headlines in the media, that one.

I'm not really sure what it could do, though; it seems to me incredibly unlikely that the manuscript is in a language that existed in medieval times that we somehow have zero record of outside this book, which to me leaves only the ciphertext and gibberish hypotheses. If it's ciphertext, I've not heard of AI being able to do anything our own conventional codebreaking techniques couldn't muster, and if it's gibberish, nothing can decipher it.

Re: Russian, I have very little knowledge of Russian but I'd consider it very, very unlikely that there'd be no differences locally between Russian speakers. Possibly you might find that sort of phenomenon in the sense that, say, many Siberian towns and cities are populated by people who moved from European Russia during and after WWII, so they have fairly standardised speech and a Muscovite wouldn't sound out of place, but I'd be flabbergasted if there were no variations within European Russia except some very convenient bright-line distinctions between Russian, Belarusian and Ukrainian. If nothing else, I've never met a person who lived in even a moderately sized city who couldn't tell the difference in accents between people from the rich part of town and the poor part.

Reveilled
Apr 19, 2007

Take up your rifles

reignonyourparade posted:

An important thing to note here is "art getting swept up in training the model" is ultimately completely tangential. There are already models out there where all the training data was fully licensed. Maximalist restrictions on the copyright decisions here will still not actually put any brakes on this train.

For the other question, well, the answer is ultimately not too different to the same answer for an artist who never wanted to touch any of these digital art programs and continues working in nothing but watercolors/oils or something. Luck into getting popular enough that you still make it even with the more involved process for each work. This particular problem is not exactly a NEW problem even if the tech creating it is new.

Is it tangential though? It seems like there are polemicists on both sides of the issue talking about some nebulous dystopian future where everyone is under the corporate thumb of either Disney or Google, and in between there are artists who have the very current and real objection to the use of their copyrighted art in the most widespread and popular models.

Models which don’t use copyrighted data exist, but merely asserting a solution exists and actually implementing that solution are very different things. Reassuring artists that their own work won’t put them out of a job, and reassuring AI developers that there’s a way to build their models which won’t fall foul of some future regulation might go a long way to bridging the divide and divert people on both sides away from extreme positions.

Reveilled
Apr 19, 2007

Take up your rifles

reignonyourparade posted:

It is tangential in the sense that if people are going to be put out of jobs, they are going to be put out of jobs regardless of whether their own work is being used or not. The "models are using copyright art" and "artists may be put out of jobs" problems are functionally completely divorced from each other and addressing one doesn't do anything to address the other. That's what I mean when I say it's tangential.

But is the best way to deal with the fact that people are conflating the two to just assert over and over that they’re unrelated, or would it be to actually fix the one we can fix so that it’s not an issue any more?

Reveilled
Apr 19, 2007

Take up your rifles

Gentleman Baller posted:

I think my problem is that imo you're actually describing the more extreme position. Artists put out of work anyway, but only companies like Adobe and Disney will have access to the very models theoretically good enough to end their careers. Why would I want to encourage people towards that nightmare solution over one that obviously stings more, but allows more normal humans access to these theoretical future amazing art tools?

Is it the case then that these models trained on non-copyrighted content are uniformly worse than the ones trained on copyrighted content? If that’s so, it does seem to imply that the copyrighted content provides a direct commercial benefit to the models which use it, in which case it seems very reasonable to at least discuss whether these models should pay to license it.

Reveilled
Apr 19, 2007

Take up your rifles

Gentleman Baller posted:

Not uniformly, but from playing around with it when I could, there are many things Adobe's non-"fair use" AI does very noticeably worse than openAI based models. It did a lot better than I expected though, but to be clear, this was Adobe using their massive pile of stock images. This isn't a model that you or I could train in the world you're asking for.

The thing is, once these big companies pay starving artists to create a plethora of images to plug their shortcomings, that gap is gone. A modest investment (to them) to decimate their ongoing costs. And their competition won't even be the widespread openAI based models that anyone currently could download and use, as those would now violate copyright.

Isn’t openAI a company with billionaire investors? Why could Adobe pay but they can’t?

Reveilled
Apr 19, 2007

Take up your rifles

Gentleman Baller posted:

Adobe paid for and owned those images, as part of their stock images collection, well before the new AI stuff came out, and is a company with a market cap of 190 billion dollars. I have no idea if openAI could pay or not, but if they had to pay for it I'm sure the model wouldn't be available to people like you and me.

Fair enough.

That doesn’t mean there are no other solutions, though. Right now it seems the only options being offered are this one, or banning AI image generation (either literally or in effect through some mechanism that makes them unusable for most purposes), or just telling artists who are going to lose their jobs “yeah, you will”. And if the only option AI advocates are willing to put forward is the last one, is there any reason for artists not to line up behind larger copyright holders and do everything in their power to spite you and bring you down with them?

I mean, look at the solution the EU is proposing, is that the nightmare scenario? If so, how should we prevent that solution becoming the one adopted worldwide? Telling artists to just deal with it doesn’t seem to have brought them onside.

Reveilled
Apr 19, 2007

Take up your rifles

reignonyourparade posted:

Well, that's also the one where you've got the greatest argument going on about whether it is, in fact, an issue. Someone who doesn't think it's an issue does not, in fact, WANT to "fix" it.

If that’s so it seems even more important to focus on, given that it’s the question that’s most likely to be legislated and litigated on in the near future!

Reveilled
Apr 19, 2007

Take up your rifles

Iunnrais posted:

The reason why I framed the question in terms of “what would it take for non-naive reasonably intelligent people to, on the whole, treat an AI as conscious” is because distinguishing a p-zombie from a conscious being is, as far as I’m aware, impossible. And I specified intelligent, non-naive, etc, because I know people will sometimes treat their roomba, heck maybe even their desk lamp as conscious sometimes, and I’m not trying to talk about that kind of anthropomorphization.

Because despite being fundamentally unable to distinguish between a p-zombie and actual consciousness, people DO treat other people as if they were conscious anyway! We have at least one category of beings outside ourselves that almost everyone accepts are also conscious: humans.

And we know that people can accept the idea of aliens or AI as having consciousness as well, because if we write a fictional character, consumers of that fiction easily accept, for example, HAL9000 as conscious. As a person.

It does seem like that coastline example. Land is easily identifiable. The sea is easily identifiable. But that boundary point keeps shifting and it’s not quite clear… and I really do believe we are approaching that boundary within our lifetimes, if not within a few years even. I don’t think this is a navel gazing question— knowing what traits people are going to require before they accept what was once a thing as a person is going to be extremely relevant, real soon now.

I like that “theory of mind” idea… except how do we determine whether something has theory of mind or not? Ask the right kind of questions, and ChatGPT-4, right now, can give answers that creepily feel like it might have theory of mind already. So defining “this thing acts like it has a theory of mind because of *these reasons*” seems important to me.

(The last minute or so of https://youtu.be/4MGCQOAxgv4 would be an example of ChatGPT-4 acting like it might have theory of mind, but there are others, and I’ve seen little blips occasionally in my own uses as well)

I'm not much of a philosopher, but my intuitive understanding of consciousness is that it's a state of continuous experience in which the entity receives sensory data from the outside world, builds a model of that world in its own mind, and then acts upon that model in real time to achieve its goals. I can see how it would be possible under this definition to create a conscious machine easily (in fact, I'm pretty sure that conscious machines already exist), but I don't think that matters much; I think the robot dog Boston Dynamics made is probably conscious, but that wouldn't mean if I owned one I'd tremble at the moral implications of switching it off. The only possible wrinkle I see is that machines might not fit the "continuous" part, since the nature of a computer is that it operates in discrete cycles, and I don't have an answer to that, but I don't think it fundamentally changes my relationship with the robot dog--to me it appears conscious. Surely the question behind the consciousness question is "what would it take for non-naive reasonably intelligent people to, on the whole, treat an AI as a person"?
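As an aside, that "receive sensory data, build a model, act on the model in real time" description is basically the classic sense-model-act loop from robotics. A minimal sketch of the idea, where every sensor and actuator detail is a made-up stand-in rather than anything a real robot does:

```python
import time

class ToyAgent:
    """Minimal sense-model-act loop; sensors and actuators are stand-ins."""

    def __init__(self):
        self.world_model = {}  # the agent's internal picture of the world

    def sense(self):
        # A real robot would read cameras, lidar, joint encoders, etc.
        return {"obstacle_ahead": False, "battery": 0.9}

    def act(self, decision):
        # A real robot would drive motors here; we just print the choice.
        print("action:", decision)

    def step(self):
        observation = self.sense()
        self.world_model.update(observation)       # update the model...
        if self.world_model.get("obstacle_ahead"):
            self.act("turn")                       # ...and act on it
        else:
            self.act("go forward")

agent = ToyAgent()
for _ in range(3):        # "continuous" experience, in discrete ticks
    agent.step()
    time.sleep(0.1)
```

Note the loop only ever runs in discrete ticks, which is exactly the "continuous" wrinkle mentioned above.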

I think that personhood question is a much, much harder one to answer, and it only overlaps with the consciousness question without being equivalent to it. We generally consider humans to be people even if they have no consciousness at all (imagine someone assaulting a coma patient), and we even seem to consider some of that personhood to linger behind in their things (if someone desecrates a dead body, I think most of us consider that an offence against the dead person, even those who believe the dead do not go to some afterlife where they exist in an immediate sense). To me personhood seems to have something to do with potential. If we imagine a person as an individual creature who possesses consciousness and sufficient intelligence to pursue complex, self-actualising goals and attain them, we grant personhood to any creature who has that property, previously had that property, or could have had that property but for some unfortunate twist of fate.

I can imagine an AGI which is given the goal of "attempt to achieve a state for yourself that most humans would consider optimal if it happened to them" (let's handwave away alignment questions and assume we've got a way to get the machine to behave in accordance with some semblance of morality). If this AGI builds a robot body for itself, buys a house, finds a spouse, adopts some children, and takes up painting as an artist, I think I'd find it impossible not to accept that AGI as a person. Now if we imagine that by a twist of fate the AGI is instead given the goal of "maximise paperclips", is it still a person? I think yes, based on the creature's potential to pursue person-like goals. Which I think leads to the conclusion that we ought to treat all conscious AGIs as people. Which means we probably shouldn't build them in the first place?

Regardless I don't think of GPT or other LLMs as people, because I don't think they're conscious and they don't have sufficient intelligence to pursue complex self-actualising goals. The only goal they seem capable of pursuing is "work out what sequence of words needs to come next in this sequence to maximally satisfy the human (or testing AI) on the other end". That can produce very life-like responses, but it's still not an AGI.
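For what it's worth, that "work out what needs to come next" description matches the shape of the actual training objective. Here's a toy sketch using a bigram model over a made-up twelve-word corpus, which is obviously nothing like a real LLM in scale, but the objective is the same:

```python
from collections import Counter, defaultdict

# A made-up "training corpus". Real models train on trillions of tokens,
# but the objective is the same: predict a likely next token.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which token follows which (a bigram model, the simplest possible
# version of "work out what needs to come next in this sequence").
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat", the likeliest continuation seen in training
```

Nothing in that objective checks whether the continuation is true, only whether it looks like the likeliest continuation, which is why the output can be so life-like without the system being an AGI.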

Reveilled fucked around with this message at 14:26 on Jul 13, 2023

Reveilled
Apr 19, 2007

Take up your rifles
In this analogy, if “Mexico” and “America” are the things you call intelligence, what’s the thing that’s analogous to the riverbank?

Reveilled
Apr 19, 2007

Take up your rifles

The Artificial Kid posted:

Not really, I'm arguing that human beings also have things that could be called "prediction anomalies". That's almost exactly what's meant in a Bayesian approach to perception and hallucination https://academic.oup.com/brain/article/142/8/2178/5538604.

This is kind of a separate question from whether the AIs are truly intelligent or whatever, but is hallucination even an apposite comparison to what happens with these models, though? Referring to these events as "hallucinations" (or calling them "prediction anomalies" and then saying that's what a hallucination is) strikes me as the sort of term that's been chosen more for its PR than its accuracy. Notionally we want these LLMs to produce accurate and trustworthy information; we want them to provide genuine help that aligns with our goals. Obviously getting truthful info is the optimal outcome, but when it spits out something untrue, if you call that a "hallucination", it sounds bad on the surface but it also implies that the AI is still trying to faithfully present you with true information. It implies the AI still cares about giving you the truth, it just got confused about what the truth is--like a hallucination!

But would it be more accurate to just call it bullshitting? It's not trying to faithfully present you with true information, it's trying to present you (or its own evaluation function) with information you will evaluate as "good" regardless of its truth content. Usually the easiest shortcut to giving a "good" answer is to give the truth, but it'll happily give you lies if it thinks you'll evaluate them as better. When this first caught the public imagination the models were more resistant to correction, so they'd insist to the point of insanity that [false thing] was true, but now if you pick an AI up on an error it tends to go "you're right, sorry, my mistake, here's the correct answer: [massive stream of pretty-sounding lies that contradict the previous statement]". People who are hallucinating do the first thing but they don't tend to do the second; people who are bullshitting will do both.
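A crude way to picture that distinction: the objective is closer to "maximise the evaluator's approval score" than "report the truth". A toy sketch, with the candidate answers and scores entirely invented for illustration:

```python
# Toy illustration of "optimise for what the evaluator will rate as good",
# with truth playing no direct role. All answers and scores are invented.
candidate_answers = {
    "I don't know.": 0.2,                                   # honest, but rated poorly
    "It was first described in a 1987 survey paper.": 0.8,  # confident, plausible, false
}

def pick_answer(candidates):
    # The picker consults only the predicted approval score, never an
    # "is this actually true?" signal, so a confident fabrication can win.
    return max(candidates, key=candidates.get)

print(pick_answer(candidate_answers))
```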

Reveilled
Apr 19, 2007

Take up your rifles

reignonyourparade posted:

I think when the lay person hears AI they imagine literally anything in the range of "thinky computer"

Yeah, I think people way overestimate the degree to which your average person gives a poo poo. I’ve heard multiple people raise the objection that we shouldn’t call the current systems AI because your average person thinks that means a full general intelligence and that confusion is being exploited, but I can say from experience that when you talk with your average person about this their usual starting position is “thing I maybe heard about on the news once that I don’t care about and have no particular opinion on”.

At Christmas a member of my family showed us a thing she had on her phone, which was a ChatGPT bot which conversed humorously in our local dialect. She passed it around and everyone had a laugh typing to it and reading its responses. As we talked about AI afterward, it was clear to me at least that none of these people, many of whom were having their first actual interaction with an AI product after hearing about it only from the news, thought they were interacting with any sort of human-level intelligence. Their conception of what was going on in that family member’s phone began and ended with “the phone does a funny thing”.

Lay people don’t think AI is sentient or self-aware or generally intelligent because the term “AI” is misleading. They don’t think about AI at all.


Reveilled
Apr 19, 2007

Take up your rifles

mobby_6kl posted:

No that's right.

That said, the generative AI stuff will make it easier and cheaper. You could GIS, but you'd need to find a bunch of images that show what you want to grift, are consistent, and aren't recognizably an existing thing or reverse-searchable. One could also subtly or not so subtly enhance images of the real location/product so that when people do show up, it's vaguely similar to what they expected, just a bit (much) shittier.

Same with the scripts, you could write that stuff yourself or steal it somewhere of course, but you could more easily generate the specific scripts you need by asking ChatGPT.

To be honest you don’t necessarily need to pass those hurdles, local social media had identified this as a grift in the weeks leading up to it and even had the name of the organiser (who has previous form for it), but it still sold tickets. If they’d used pictures from the film and not even bothered with a script I doubt it would have made much of a difference.

I imagine that for the vast, vast majority of customers what convinced them the event was legit was that one of our local event ticketing websites was selling tickets. Whats On Glasgow don’t really do any validation of the events they list, but the people who bought these tickets probably assumed the opposite. By the time they ended up on the website it was probably after seeing the listing on WhatsOn, and so they were primed to assume it was real. And I imagine a lot of them never even cared to look up the event’s website.
