Liquid Communism
Mar 9, 2004


Out here, everything hurts.




SCheeseman posted:

I don't think many want to chat about it; tensions are way too high and it's fracturing communities. Anyone who has their job and livelihood threatened by it, or who sees it as an affront to humanity, is mostly interested in ways to crush it. Pretty understandable; the capitalist powers that be will take this technology and use it in all the ways people fear it will be used.

But figuratively trashing the looms has never worked. AI is the end point of humankind's reliance on tool use; the problems we're grappling with now started when cavemen sharpened their flint to the point they could carve rock and/or skulls. The best we can do is manage it, something we've had a shaky history of doing, particularly as of late, thanks to a society that predominantly values accumulation of capital over human wellbeing (while those in ivory towers try in vain to equate the two). An AI ban might be technically possible and enforceable, but not when every world government wants this AI thing to happen, and given a social and political system with truly humanistic values, automation wouldn't be a problem anyway.

It's the rich people. They are the baddies.

Anyone who understands the technology behind it on more than a surface level generally stands with those authors and artists, because they understand that none of this AI functions without massive datasets that invariably violate the rights of authors and artists; no organization could afford to license works at that scale. Even at massively discounted commercial rates, the image databases behind them would cost billions of dollars.

It is hilarious how clearly this is designed to benefit the rentier class, though: using billions of dollars of supercompute hardware, at a cost of millions a month in power, cooling, and network capacity, to avoid paying a few graphic designers $40k a year.

gurragadon posted:

It's like the creative-arts industry is going through what manufacturing went through, on a really fast time scale. All those people working on an assembly line are replaced by a machine and somebody to make sure it works. Now the same thing is happening to creative freelancers.

The advancements are coming really fast now though, and it's going to hit white collar workers everywhere. Like I posted a bit earlier, GPT-4 can ace the bar exam, and you can hook it up to other programs so it can perform accounting practices. We're quickly making most of the population's employment not worth the money. But if it goes beyond creatives, it will start hitting workers who command a pretty strong voice in the economy.

Will lawyers or doctors have a big enough voice when their turn comes? Since it's so similar in my mind to what happened to assembly lines in the past, we know what NOT to do with people affected by advancements in technology. A simple example is just watching Roger and Me. Did any societies treat their redundant workers better?

GPT-4 being able to pass the Bar is more a function of how the test is written than of the ability of the AI. The test is specifically designed to probe the memory of applicants in a stressful, long-form format. A search algorithm with effectively perfect recall will do well at this.

Doing a lawyer's actual job, analyzing the law and presenting facts in light of that analysis, in a convincing and logical manner aligned to precedent, is far outside its scope. The same goes for doctors: being able to cross-reference WebMD really fast is no replacement for the people skills required to actually get an accurate description of symptoms from a patient, or to create a treatment plan across multiple presenting conditions that balances quality of life, patient comfort, and effectiveness of treatment.

Hell, GitHub's code-writing implementation is going to go hilariously badly, because it is trivial to poison the data sources so as to make the outputs useless, or to inject exploits that the generated scripts then recreate.

Liquid Communism fucked around with this message at 05:24 on Mar 27, 2023


Liquid Communism
Mar 9, 2004


Out here, everything hurts.




Even on factual questions, if there isn't enough context or your query requires too much time, it'll just make poo poo up. It's just a search engine hooked to the sort of word predictor your cell phone uses to respond to texts.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




gurragadon posted:

I do have to agree with you, for the most part, that doing well on a test is no indication something will be good in practice, or vice versa. But I also can't completely dismiss a connection between the two, because many people I've worked with were also top of their class, and lack of effort in studying for a test can indicate lack of effort in the profession. The bar exam does have an analysis section and lawyering-task sections, though. I haven't taken the test, but maybe you have? How similar are those essays and tasks to the daily tasks of a practicing lawyer?

I think that GPT-4 would be bad in the courtroom, but I still want to see it happen. I want to see it do badly in a lower-stakes case. Currently the most realistic use of AI technology is to streamline offices and to do the tasks the person in charge doesn't want to do but doesn't want to pay for either. I don't see the final human being removed from a system using AI technology at any point in the near future. As you said, the technology is NOT at the level where it can be trusted. But even if it were, somebody has to be responsible. Somebody has to be there to sue. And I don't know how you lay responsibility on a tool like GPT-4, and AI creators are going to fight that to the end, I think.


Remember that the Bar is not an indication of a good lawyer. It is a minimum standard to be allowed to practice law, and in the case of this 'pass' the model again starts from the massive advantage over a human of being able to look up sources with total recall. I'd imagine the worst law student on the planet could pass the exam if it were open book.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




gurragadon posted:

It does have that advantage, but I don't really think it's fair to count it against the AI. I mean, the system has access to that information by design; my brain has access to the legal facts I learn, by design. I'm just a lot worse at that.

I'm not really familiar with law exams, but I can speak to open-book tests in chemistry. My Physical Chemistry class was open book, but if you had NO idea what to make of the book, it wasn't very helpful. The worst Physical Chemistry student in my class failed. I would imagine law is of similar complexity, just in a different way; if it is not, then why is lawyering so regulated? I think that even making the test open book would weed out the very worst, basically the people unwilling even to learn how to learn to be a lawyer.

That's really a problem with the Bar exam, unfortunately. Is there renewed interest in changing it after the GPT-4 results? I mean, AI would be really good at storing information and pulling it up, so why do we still need to test for that on the Bar? Maybe the format should shift toward even more analysis.

Given that part of the point of the test is 'does this person remember the principles well enough to make decisions based on them under stress', the recall ability is indeed something being tested. There's a reason candidates cram for weeks before taking the test, trying to hold as much information in memory as they can.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




gurragadon posted:

I know about cramming and operating under stress. So, is the Bar useful for determining whether lawyers are good or not? Because that seems like an endorsement of the Bar Exam, on which GPT-4 scored in the 90th percentile.

Edit: I mean, is the ability to make decisions under stress a skill that a lawyer needs?

A GPT-4 implementation is incapable of experiencing stress, and again has open access to the materials being tested on, so by its very nature a test of its stress management and memory cannot have any meaningful results.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




gurragadon posted:

I was asking whether you thought the bar exam had meaningful results for a human. What you see as not having meaningful results from GPT-4, I see as GPT-4 removing the obstacle of stress management from the process and performing in a superior way.

If the Bar exam were still timed but open book, would that change your opinion? The information would be available, and the difference would be GPT-4 being able to access that information faster.

It does have meaningful results for a human, although I'm sure there are better approaches, but that may be my own bias against the validity of standardized tests.

If passing the test were the purpose of the test you would have a point regarding the GPT-4 results.

It is not. The purpose of the bar exam is for a candidate to demonstrate that they have the skills necessary to practice law at the standard set by the bar association, as a condition of admission to the bar and licensure to practice law. The ability to manage that stress is part of the point of the test. That GPT-4 cannot experience that stress is not an indicator of superiority so much as a demonstration that it lacks the basic capabilities being tested in the first place.

So far as I can tell from the article, it was also only tested on the first two portions of the bar exam, the multiple-choice and essay portions, and not the significantly more important performance test, where an aspiring lawyer is given a standard task, such as generating a brief or memo for a case file set in a fictional state, along with a library of the laws of said fictional state.

I do not expect that GPT-4, by its design, is capable of even that relatively simple task of reasoning over a dataset on which it has not been trained.

All somewhat beside the point, as GPT-4 cannot, in point of fact, practice law: by definition a lawyer requires personhood, which a chat algorithm is incapable of.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




IShallRiseAgain posted:

Honestly, I think once it reaches the point where an AI can replace programmers, it's basically capable of replacing any job, except for jobs that strongly rely on social interaction. There might be a slight delay for physical labor, but the tech is almost already there.

I honestly don't think it'll ever be a functional replacement for programmers. It can only interpolate from what it is trained on, so it can't exactly find novel solutions to problems, and it is incapable of any but the shallowest context. It'll speed up routine code much like predictive texting speeds up messaging, but it won't replace human experience and knowledge.

I've been in IT long enough to also understand that a lot of businesses will try to sell it as a replacement for coders, then spend a poo poo ton of money hiring people to unfuck the code the AI was confidently wrong about and they implemented without sanity checks.

Doctor Malaver posted:

It would be helpful for objections to AI to be specific and free of padding. For instance, why complain about "false text" flooding the internet? How's that going to be different from now? Are we in the end days of truthful internet, free of conspiracy theories, money scams, hate speech..?

I think most of us old grumpy tech folks are well aware of the vast signal-to-noise change social media brought to the internet, and understand that a tool that lets individual bad actors generate said noise vastly more efficiently will poison the usefulness of the internet as a platform, in much the same way that junk mail, robocalls, and email spam either crippled those tools or required extensive legal and/or programmatic solutions to filter them.

Half the reason conspiracy theories get so much traction is that they're spread with great volume, and most people absolutely don't have the time in the day to research and critically evaluate every piece of information they consume.

Liquid Communism fucked around with this message at 11:48 on May 8, 2023

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




Owling Howl posted:

This discussion has been ongoing for like 50 years with people submitting works made by children, animals or computers to art galleries.

Ultimately it doesn't matter if you describe it as art or decoration - the impact it will have on society is the same. If an author uses AI to illustrate their book an illustrator is not getting paid to do it. Is it devoid of artistic meaning? Sure but that doesn't help the illustrator. It helps the author though...

The bigger point to me is that it's a force of stagnation. AI can't create; it can only interpolate from its training set. Meaning you're never going to get an AI Andy Warhol, or see new mediums or forms of expression evolve out of emerging art styles, because without references it cannot duplicate them. Hell, the first major use of deepfake stuff outside porn was Disney duplicating dead actors so they didn't have to recast characters they were relying on for a nostalgia bump.

As AI art makes it impossible for artists to find paying work, we lose any future works they would have made as well, and the works inspired by them. There's a reason the New Deal included funding for the arts.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




BrainDance posted:

But, how is that different from what humans do?

If you can't understand the difference between a script that puts pixels together based on what humans labeled training datasets as, then waits to see if the result provides enough of a pattern for human pattern recognition to decide it's what they wanted, and a human being creating representational art to convey their lived experience to others, I'm not sure I can help you.

AI doesn't create. It does statistical prediction over a massive dataset to try and make a pattern the user will recognize. It no more understands what a 'dog' is than a toaster does; it can only contextualize one via the files in its training dataset labeled (by humans) as 'dog'.
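
To put a toy example on what 'statistical prediction' means here, a minimal sketch (hypothetical corpus, nothing like a real model's scale): a bigram counter that 'writes' by emitting whichever word most often followed the previous one in its training data. Real LLMs are enormously more sophisticated, but the objective has the same shape: predict the next token.

code:
from collections import Counter, defaultdict

# Toy "training set": the model's entire world is these strings.
corpus = "the dog chased the ball . the dog ate the food .".split()

# Count which word follows which. Pure frequency, no understanding.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Emit the statistically most likely next word seen in training.
    return following[word].most_common(1)[0][0]

word, out = "the", ["the"]
for _ in range(5):
    word = predict(word)
    out.append(word)
print(" ".join(out))  # "the dog chased the dog chased"

It produces word-shaped output without ever containing anything you could point to as knowing what a dog is.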

PBS, of all mainstream sources, did a pretty good segment on just how much human labor (mostly an exploited developing-world workforce) is actually behind making current AI tools look smart.

https://www.pbs.org/newshour/show/concerns-rise-over-treatment-of-human-workers-behind-ai-technology

Liquid Communism fucked around with this message at 14:50 on May 10, 2023

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




Lemming posted:

Just saying "there's a similarity" doesn't answer it either. Human brains are extremely similar to chimp brains, but chimps also aren't really creative in the same way humans are.

You can't just claim things are vaguely similar and wave your arms and squint, the situation is a lot more complex than that.

One of the key points is looking at how much training data being fed into the different models is affecting how good or "creative" they appear to be. They've jammed everything they possibly could into it, more words and images and text than any person could consume in thousands of lifetimes, and they still haven't figured out how many legs people have. The entire reason these models are impressive is because interpolating between things you've already seen is more powerful than we thought it was, but it's still fundamentally completely reliant on the training data.

Humans are not reliant on training data, not in the same way. Despite having access to only a fraction of the data those models have, humans can generate new information in a comprehensive way that understands what they're looking at. It's just fundamentally a completely different approach. You can use words that describe the processes similarly, but it's still not the same.

The key is the difference between interpolation and extrapolation.

An AI can make a guess at what the next point in a pattern is, based on all the other, similar patterns it has been trained on, but it is limited by the outer bounds of that training data. It will also be confidently wrong, as it is incapable of second-guessing its own work.

A human can take a series of data points and make inferences based on data not actually present.
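
A quick numerical sketch of that distinction (my own toy example, assuming numpy is available): fit a model on data from a limited range and it agrees nicely between its training points; ask it about a point outside that range and it confidently extends whatever shape it memorized, with no way to second-guess itself.

code:
import numpy as np

# Train on sin(x) sampled only inside [0, 3].
x_train = np.linspace(0, 3, 30)
y_train = np.sin(x_train)

# A degree-7 polynomial as a stand-in for "fit the training data".
model = np.poly1d(np.polyfit(x_train, y_train, 7))

print(model(1.5), np.sin(1.5))  # interpolation: near-perfect agreement
print(model(6.0), np.sin(6.0))  # extrapolation: confidently, wildly wrong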

Liquid Communism fucked around with this message at 03:26 on May 11, 2023

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




BoldFace posted:

I'm only familiar with interpolation and extrapolation in a mathematical context involving things like numbers or geometric objects. I'm struggling to understand how you use these terms with language models. If I ask GPT-4 to come up with a new word that doesn't exist in its training data, in what sense is this interpolation rather than extrapolation? Similarly, I can ask it to create a natural number larger than any other present in the training data (which is finite). You say that the training data imposes limits on the output of the model. I would like to know how these limits manifest in practice. Is there a simplest task an AI fails at because of these limits, but a human doesn't?

If you asked it to come up with a word not in its training data, how would you vet it? It could certainly generate semi-random nonsense and tell you it's a new word, but it couldn't make like Tolkien and invent a language from first principles.

A better and more common example is troubleshooting programming code. ChatGPT is absolutely terrible at this, because it is both confidently wrong and incapable of making inferences of intent. A human coder can look at a piece of code, and the use it was meant for, and evaluate what the intent of the writer was and where they hosed up, even if the syntax is okay. This is such a basic thing that it's elementary-level problem solving, and large language models are utterly poo poo at it because all they can do is compare it to other examples of syntax they were trained on and try to vomit up something linguistically similar.
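
A contrived example of my own (not actual ChatGPT output) of what 'syntax is okay, intent is broken' looks like. Nothing here is ungrammatical Python, and pattern-matching against similar-looking code finds nothing unusual, but a human reading the stated intent spots the bug instantly:

code:
def average(numbers):
    """Intended: return the arithmetic mean of a list."""
    total = 0
    for n in numbers:
        total += n
    return total / (len(numbers) - 1)  # bug: divides by n - 1, not n

print(average([2, 4, 6]))  # prints 6.0; the intended answer is 4.0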

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




Tei posted:

1- That AI art can't be copyrighted.
2- Unfortunately, that the way Midjourney just opts out of copyright law to steal its datasets is legal.
3- That using AI art to do illegal stuff (like revenge porn) is illegal.
4- Maybe having to register your AI thing, if that proves feasible, and in some territories. In the EU it is mandatory to register databases that collect citizens' data.
5- If the registration thing ever happens, they may pass new laws that would be somewhat unnecessary, like your AI art can't use people's faces without permission, or be considered offensive. But they may have to wait a bit before passing a law like this, to see whether it's possible to really curtail what an AI art thing can do. So far it seems the industry is self-censoring; that's why violence/sex/racism is banned in Midjourney/ChatGPT.

I don't expect #2 to hold up to any court challenge. Too many companies' main profit center is rent-seeking on copyright ownership, Disney has better lawyers than the government, and the usual 'we're just a carrier' argument isn't going to fly when they're producing derivative works.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




SCheeseman posted:

The argument will probably be a fair-use one, pointing to Google Books being ruled a transformative use in spite of the entire business model of that service relying on unauthorised scans of copyrighted books, serving unedited extracts for free, supported by advertising. Google attributes works to their authors, but that didn't stop a lawsuit from rights holders, and that AI generators don't spit out explicit copies is arguably as much an extenuating circumstance.

AI generators being capable of spitting out explicit copies will be enough to prove that they're using copyrighted materials to create the backend for their for-profit service without licensing. It'll go over about as well as someone trying to start a new Netflix with nothing but DVD rips.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




roomtone posted:

There is still profit being generated from your art - the difference is, now instead of a % of the take, and related ways to support yourself, you get nothing; the tech companies take all of it, including even the credit. It could be the same with any creative format, if it's just allowed to happen. Somebody is still going to profit from human creativity; it's just going to be people who have nothing to do with the creativity.

Yep. Always fun to watch the rentier class once again engage in outright theft, then turn around and offer to sell back the work they stole, at a premium.

That's the thing about every last one of these techbro 'disruption' schemes, they're all at the root an attempt at landlordism. The people running these content-generation AIs have no desire to produce art or literature. They simply want to make sure that they get a cut on the value of any that is produced.

Liquid Communism fucked around with this message at 09:13 on May 14, 2023

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




Count Roland posted:

Here's a post from Slashdot which I thought was interesting:

'Copyright is a *privilege* we as a society *give* to creators to encourage creativity.
If we extend that privilege for more than a few years then it no longer encourages creativity but instead *stifles* new works based on the old.

To benefit humanity, copyright terms must be cut to five years or shorter.'

It in no way follows that it is then a social good to allow corporations, which are explicitly not people and thus cannot create, to violate copyright law for profit while also exploiting said law to attack creators.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




KillHour posted:

You might be speaking from a personal values perspective, but as far as the law is concerned, I'm pretty sure this is explicitly not true.

From a legal perspective it remains true, corporate personhood in some technical senses aside. People create works for hire which are owned by the corporation, but the corporation itself, not being human, is incapable of authorship.

GlyphGryph posted:

Maybe in some other copyright regime this would make sense, but in the one we live in, copyright belongs to the corporations creators work for, and to the money-men who buy them, not to the creators themselves, the vast majority of the time, often specifically so those copyright laws can be used to deprive the creators of the rights to their work. (ZA/UM is a great recent example of this.)

Like, what you're describing here - that's the standard, the foundation of our copyright system. Any attempts to "strengthen copyright laws" under the current system are almost certainly going to make that worse. It's not like anyone is proposing that corps lose the ability to hold copyright in favour of giving it to creators.

As said above, are you criticizing the existence of copyright because the creators have not destroyed capitalism yet? Creators own their work unless forced (mostly by financial coercion) to transfer that ownership, either via 'work for hire' or outright IP sales. Or, as is common and is the foundation of most LLMs currently in use, it is taken by outright theft that the creators lack the funds to successfully oppose in court, because the rentier class has won at capitalism and amassed resources more surely than the old nobility ever did.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




Count Roland posted:

I believe LLMs are poor at logic. Dealing with facts requires the AI to state that things are true or false. Such statements can be logically modified, i.e. if x is true then y. A model that is guessing the next symbols in a phrase will sometimes pull this off but can't itself be reliable. The AI needs to do logical operations, which I assume is possible, given how logic-based computing is.

They're not so much poor at it as incapable of it. There's no logical processing going on, just a query of what is most likely to be the right response based on what is related in their training data.

It's the same for fiction writing: it's honestly more work to make anything beyond a very short LLM response readable than to write your own rough draft, because the model is incapable of context and confidently wrong often enough that you're going to lose a ton of time editing for continuity and factuality, and catching plagiarism.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




GlyphGryph posted:

They don't reference things, though? Once the AI is created and trained it doesn't have any of the original artwork to reference, and conceptually it doesn't make any sense anyhow. That's not how these things work.

The AI's entire 'memory' consists of its training set. Hence why you cannot remove something from said training set without retraining the AI, or it will continue to use what has been indexed.

It is incapable of creativity. It is simply pulling elements from training data that is tagged similarly to the prompt given.

This is a large part of why the EU is looking at it sideways: present designs cannot comply with the GDPR, either in proving they do not contain PII or in honoring the right to be forgotten.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




Clarste posted:

The idea is to define the program as something that cannot "learn" and can only "copy" so therefore anything in its training set is copying by definition. Like tracing. A computer cannot have a style, it can only trace things.

Yep. Even the 'draw a thing in Bob Ross' style' prompt is a dodge, because the algorithm has no idea what Bob Ross' style is. It knows there were files in its training set that were human-tagged as being produced by or similar to Bob Ross, and will now iterate on parts of them to generate an image that the human user will then decide is or is not what they wanted.

The only thought of style here is in the prompt giver and those creating the metadata in the training set.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




SubG posted:

And if you want to be fully confused about the issue, look up the images involved in Leibovitz v. Paramount, which went the other way.

Not much confusing there; Leibovitz v. Paramount was ruled parody, done for humor value and not in competition with the original photographer's work.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




https://twitter.com/robertkneschke/status/1662034837618786304

Gee, I wonder why they'd be concerned about liability if forced to disclose their training data. :v:

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




StratGoatCom posted:

Oh please, you always have to be clear on chain of title, you dork. The AI weirdos made this problem, not us. And yes, we will watermark; yes, we will do things to choke your models, because you did not ask permission for usage, and theft invites retaliation.
https://twitter.com/ceeoreo_/status/1660674844302749698

Speaking of this, a lawyer went and did just that.

https://twitter.com/questauthority/status/1662273759259295746

Peter LoDuca, arguing a case before the Southern District of New York, entered an affidavit generated by ChatGPT that outright made up citations to cases that never existed. When pressed by the Court, he then generated those decisions with ChatGPT and submitted them to the court, complete with a forged notary stamp.

To say the judge is a bit upset would be the understatement of the year.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




BrainDance posted:

I have no idea how paying a person for their stuff showing up in an output would work or make sense. It's not pulling from the 'number I made to represent this artwork in the model'; it's pulling from 'averages-but-not-really-averages I got from denoising a whole bunch of images that seemed related in some way to this token.' So, what, you pay every artist who ever drew or photographed a cat a billionth of a penny every time someone uses AI to generate anything that may have been influenced by its understanding of a cat, which actually includes an incredible amount more (which we probably can't tell) than just the prompt 'cat'?

It doesn't make sense, which is why that isn't what anyone is angling for, but rather requiring licensing the works used to train the model in the first place.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




Gee, almost like there's a reason the ethics of technology has a lot to say about why it's a net negative for society to create systems that rely on mass appropriation of others' work for commercial purposes.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




If you let the AI company set your rates, sure.

They need your content, you don't necessarily need to sell to them, and should be pricing in the damage their business will do to your future income if you choose to do so.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




I'm not against banning its application as a content-generation machine, but given that we haven't managed to stop illegal hotels in residential areas (Airbnb et al.) or scab taxi services in places where taxis are regulated, despite existing laws that could be enforced against both, I'm not sure regulation can get there fast enough.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




KwegiboHB posted:

There we go, I knew you were in there somewhere.

How are you going to ban

||ε − ε_θ(x_t, t)||²

?

The mean squared error between the actual noise at time t and the predicted noise at time t given some image.
This funny equation is the beating heart of what's now called AI Image Gen.
If you don't understand it, grab two mirrors and play with them until you do. (Look at this bougie gently caress who can afford TWO WHOLE MIRRORS in this economy)

I'm not.

One doesn't need to ban the underlying technology to make a given use of it verboten.
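
For anyone following along at home, here's roughly where that equation lives in practice: a minimal training-step sketch (PyTorch-flavored, with denoiser standing in for the network ε_θ and alpha_bar for a precomputed noise schedule, both hypothetical names):

code:
import torch
import torch.nn.functional as F

def diffusion_loss(denoiser, x0, alpha_bar):
    # Pick a random timestep t and Gaussian noise eps for each image.
    b = x0.shape[0]
    t = torch.randint(0, len(alpha_bar), (b,))
    eps = torch.randn_like(x0)

    # Forward process: blend the clean image with noise per the schedule.
    a = alpha_bar[t].view(b, 1, 1, 1)
    x_t = torch.sqrt(a) * x0 + torch.sqrt(1 - a) * eps

    # The quoted objective: ||eps - eps_theta(x_t, t)||^2 (batch mean).
    return F.mse_loss(denoiser(x_t, t), eps)

Which rather makes the point: the 'beating heart' is a garden-variety regression loss. Nobody needs to ban mean squared error; regulation targets what you train it on and what you sell it as.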

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




Owling Howl posted:



Just based on that, I think the copyright debate is largely a distraction, because we're fundamentally going to end up in the same place.

I'd consider that a pretty heavy assumption. As an example of why, look at how often corporate tech builds on open source to vast profit, then takes its solutions closed-source to protect that profit. Along with the last ten-plus years of VC-influenced business operating on the base premise that, since you can't jail a corporation, breaking the law is fine as long as the fines for doing so are less than the profit made in the meantime. Some of your patsies may see fraud charges, but the investors rake in the cash, and that's what matters.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




The artist is playing every instrument at an EDM concert, as they're generally solo sets. What the hell are you on about?

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




BrainDance posted:

At the vast majority of them, no, the DJ is not. At least in the early and mid 2000s (and probably before? and probably now? those were my partying years), the producers were seen as musicians, but a large, large chunk of the community absolutely did not see what the DJs were doing as any kind of artistry, or as playing anything. It was actually a very similar situation to this.

Just because an audience who has never tried to create an art form lacks understanding of the technical skill it requires does not mean that skill is not being used.

Connecting right back to AI-generated art, where users want art made to spec without either investing in the skills to create it themselves or the resources to hire someone who has done so.

(And as someone's inevitably going to Kramer in with it: generating an AI prompt is in no way similar; if anything, it's less effort than creating a proper spec for commissioned work.)

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




Jaxyon posted:

It would also be more viable if the recording industry hadn't more or less instantly decided that it was entitled to more than half of an artist's income by virtue of owning a microphone and a recording device.

Almost like the rentier class are parasites making their profit off of others' work because they lack the drive to learn a skill!

Much like AI 'artists'.

Liquid Communism fucked around with this message at 14:11 on May 31, 2023

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




Not particularly, because as you say, I don't recognize writing Google Image Search queries aimed at others' material as a form of artistry, even if you abstract it out a layer and use an overgrown predictive texting algorithm as a middleman.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




karthun posted:

Why not? Is any expression of text a form of artistic expression? If so, why not this one?

Because ideas are a dime a dozen, and the least important part of artistic expression. Text absolutely can be a form of artistic expression, when it is used as an artistic medium, but holding up a search query for 'elf paladin character design blonde with sword' as equivalent to even a quick sketch by someone without developed art skills is flatly absurd.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




gurragadon posted:

I think a lot of this is just that we are in the nascent stages of this kind of technology. ChatGPT was the first impressive LLM I've ever seen, and even though I know it makes mistakes, I still find it incredibly impressive. I don't know what the foreseeable future holds or how fast breakthroughs will happen. Is it like fusion, where we're always almost there, or is it really going to keep improving quickly now?

Deeply unlikely.

ChatGPT isn't really that much of a breakthrough algorithm-wise; they just threw a massive pile of gear at it. Getting it to behave as well as it does now takes millions of dollars of hardware, plus ongoing power and cooling investments to run it. MS talked about what they threw behind OpenAI earlier this year: https://www.networkworld.com/article/3691289/microsoft-details-its-chatgpt-hardware-investments.html

$1bn in 2019 to get here, with another $10bn this year.

The hardware involved is reported as Azure supercompute, running on tens of thousands of NVIDIA A100 GPUs quoted at $10k each.


That's not particularly scalable unless you're a megacorporation.
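
Back-of-envelope on the reported figures (my arithmetic; the GPU count is hedged because only 'tens of thousands' was stated):

code:
gpu_price = 10_000              # USD per A100, as quoted
for gpu_count in (10_000, 30_000):
    capex = gpu_count * gpu_price
    print(f"{gpu_count:,} GPUs -> ${capex / 1e9:.1f}B in GPUs alone")
# 10,000 GPUs -> $0.1B; 30,000 GPUs -> $0.3B, before power, cooling,
# networking, or the building to put them in.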

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




KillHour posted:

You mean the military wrote a report on theoretical risks given a scenario - something they do constantly for everything you could imagine - and it was reported as something that actually happened? I'm shocked. This is my shocked face.

Edit: I'm glad my bullshit detector at least worked well enough to see the obvious holes in the story.

Yeah, it's a fundamental thing any time the measure you use to gauge success isn't the actual thing you care about. See also: capitalism and using the capacity to acquire wealth as a proxy for total economic contribution.

It turns out that algorithms for finding local minima are really good at abusing those situations.

It's also the plot of Terminator, where Skynet starts a global thermonuclear war because the easiest way to stop the humans from turning it off was to provoke a mutually-assured-destruction nuclear counterstrike.
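
A toy illustration of that metric-gaming failure (entirely invented, just to show the shape): hill-climb on a proxy score and the optimizer cheerfully drives the thing you actually cared about into the ground.

code:
import random

true_goal = lambda x: -abs(x - 5)   # what we actually want: x near 5
proxy = lambda x: abs(x)            # what we measure: raw magnitude

x = 0.0
for _ in range(1000):
    step = random.uniform(-1, 1)
    # The optimizer only ever sees the proxy.
    if proxy(x + step) > proxy(x):
        x += step

print(f"proxy: {proxy(x):.0f}, true goal: {true_goal(x):.0f}")
# The proxy soars while the true goal craters: it did exactly what it
# was told, not what was wanted.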

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




SCheeseman posted:

https://www.youtube.com/watch?v=_MrGNlXRi9M
The questions, or rather the answers, start getting existential near the end. Keller seems to draw a line between human intelligence and the work being done in AI, which was surprising.

Not remotely surprising to anyone with the slightest idea of the difference between a jumped-up autocorrect and an actual GPAI.

Corvids are orders of magnitude beyond current 'AI', much less actual human-level thought from a machine.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




Doctor Malaver posted:

I'm trying to be impressed but these stats don't mean anything to me. :(

For context, you can always go with Bo being good enough to play at the professional level in 2 of the 3 primary US pro sports (baseball and football) at the same time.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




DeeplyConcerned posted:

I think people will tend to treat these things as conscious when they start possessing enough qualities that we associate with consciousness. Particularly facial expressions and the expression of emotion.

If I tell my butler bot to fetch me a brandy and it drops the glass, I might tear into the butler bot and call it a worthless piece of poo poo that should be sold for scrap. If the butler bot then frowns and slumps its shoulders, I may apologize and say I didn't mean it. I may know that the butler bot doesn't possess consciousness, but I would still feel bad treating a robot that displayed human characteristics like poo poo.

That's just the very human tendency to anthropomorphize literally everything. It's one of the psychological factors AI developers now lean on to try to convince less tech-savvy users that what they're interacting with is an AGI, rather than a predictive-text algorithm incapable of contextual memory, much less emotional states.

Rappaport posted:

Possessing a credible theory of mind, for me.

I've seen a dog look at me via a mirror when I spoke to him, which to me signaled that a) he knew he was being spoken to b) he understood what a mirror was, and that I understood it too.

I'm not sure how to expand this to beings without a similar physical body. Obviously the famous HAL-9000 was sentient and had a theory of mind, but they went insane because of it. How does a chatbot make me understand that they have a theory of mind, instead of a Bayesian random walk that just tells me things the algorithms assume I want to hear? I don't have an answer to that.


As for what I'd find to be convincing proof of AGI: an AI that can learn a task outside its training data without being fed specific instruction, and then later, unprompted, apply what it learned to a different but conceptually related task, would be a minimal start.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




Tei posted:

I am a way better programmer than ChatGPT. Sometimes I get frustrated at how obtusely mediocre ChatGPT is.

At the same time, when I need something mediocre like 'give me a curl request sending photo1 and photo2 to /face_recognition', I get the resulting code in about 3 seconds. Way faster than I could even begin.

But there's something worse, much worse.

I could say something horribly poorly, with spelling horrors, using the wrong word, mistaking a few parts... and I still get a good answer. ChatGPT is better at receiving broken information than me. ChatGPT would understand a customer much better than I can. In that sense ChatGPT is a much better programmer than I can possibly be. My problem is that when I receive conflicting information, I get super frustrated, angry, and don't know what to do. And people naturally produce this type of feedback all the time: 'We want all buttons on our webpage to be permanently disabled when the user double-clicks.' <-me-> 'Also the logout button?' 'No, not that one.' 'And the search button?' 'No, that one's excluded.' 'And the filter buttons?' 'No, not those buttons.'
In user speak, all is not all. First is not first. Left is not left. Right is not right. Yes is not yes. No is not no. They start talking about what they want, jump to how they want it, describe a broken mechanic as desirable. They just want things to work.
All of this frustrates me to no end. But ChatGPT gets this poo poo and produces correct code, in 3 seconds.

Maybe I am too much like a robot, and ChatGPT too much like an empathic human being.

If they ever get ChatGPT to write good programs (not just 'code'), the hilarious thing is that it will be better at deciphering users' broken feedback than many of us programmers.

That much is obvious. A generative AI can never be better than the average of its inputs, and there's a ton of lovely inputs.


Liquid Communism
Mar 9, 2004


Out here, everything hurts.




Lord Of Texas posted:

This is not remotely true, and it's such a blatantly false, low-effort take that I don't feel it's even worth refuting. Just google any number of LLM benchmark papers on Semantic Scholar.

My dude, you're trusting an AI-based research tool to provide you with accurate summaries of scientific literature, when the technology is well known for outright making stuff up that is trivially wrong?

Continually yelling 'nuh uh' is not a refutation.
