cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

StratGoatCom posted:

hahaha, no. Commercialization is the thing; it had massively slid too much toward transformation before; while it wasn't with AI in mind, the sort of precedent it sets is exactly what is needful in dealing with this tech.
Art is done by artists. Artists, like most people, don't like to starve or freeze to death. This fact has been a great influence on the history of art, from the dawn of humanity to the present day. All professional artists have always commercialized their work, because being a professional artist means relying on one's artistic work for support (you can't eat or live in art).

As a specific example, a tremendous amount of the art we have from the Renaissance consists of portraits of rich people. You'd think that we'd have much more art concerning peasants, as they were the vast majority of the population at the time. I guess the artists must have had some kind of ulterior motive. I sure hope it wasn't anything commercial, because the artists' styles seemed to be influenced a lot by each other!

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Gentleman Baller posted:

If you have any evidence or legal analysis that points to this I would love to read it.
Who needs that when you can just assert things and reality bends to make them true?

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Tei posted:

That doesn't exist.
If my style is painting big red noses, you get that information from somewhere.

Sure, you don't have to store a copy of the original artwork, you learned from it, but it's still a machine that needs my work to create yours, using my style.
Styles aren't copyrightable.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

reignonyourparade posted:

Copying styles is legal.
This very basic fact apparently needs to be pointed out every page in this trash fire of a thread.

Tei posted:

That would be a complicated way to copy my style, but it's still copying the style. Obfuscating something might confuse robots and naive people, but it won't impress judges.
I replied to you, pointing out what reignonyourparade said on the last page. Read the thread and quit giving the rest of us brain damage with your dumb uninformed posts.

cat botherer fucked around with this message at 00:09 on May 29, 2023

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

gurragadon posted:

This is from earlier in the week and I missed it, it's really interesting and what excites me the most about machine learning and AI research. Researchers are finding better ways to break down processes in our brains and replicating them in machines and a bunch of these systems together would be needed for any kind of AI. All of the similarities between neural nets and brains that are emerging really make me think that it's the right direction to be heading in. It's both very impressive that humans might finally be able to model our own intelligence, but humbling to know that there is nothing special about being a human that can't be replicated.

I'm also interested in how this could be applied to human brains. If and when we develop more complex or accurate models it would be helpful to see where those systems end up breaking down and that might help us find similar ways that diseases or disorders that target the brain work.
I'd be cautious about ascribing too much biological plausibility to neural networks. Each "neuron" is just a scalar activation function applied to a weighted sum of incoming connections ("synapses") from the previous layer. A biological neuron, in contrast, is an incredibly complex machine in itself.

Almost all existing neural networks are also trained by variants of gradient descent, which basically just walks downhill to minimize an objective function. This is slow and takes billions of times more energy than biological neurons do.
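To be concrete, here's a toy numpy sketch of what one such "neuron" and one gradient descent step amount to (purely illustrative; the function names and the tanh/squared-error choices are mine, not any particular framework's internals):

```python
import numpy as np

def neuron(x, w, b):
    """An artificial "neuron": a scalar activation (here tanh) applied to a
    weighted sum of inputs coming from the previous layer."""
    return np.tanh(np.dot(w, x) + b)

def gradient_descent_step(w, b, x, y, lr=0.1):
    """One plain gradient descent update on squared error for a single
    neuron: compute the downhill direction and take a small step."""
    z = np.dot(w, x) + b
    pred = np.tanh(z)
    # chain rule: d(0.5*(pred - y)^2)/dz = (pred - y) * tanh'(z)
    dz = (pred - y) * (1.0 - pred ** 2)
    return w - lr * dz * x, b - lr * dz

# Toy usage: nudge one neuron toward a single (input, target) pair.
rng = np.random.default_rng(0)
x, y = rng.normal(size=3), 0.5
w, b = rng.normal(size=3), 0.0
for _ in range(200):
    w, b = gradient_descent_step(w, b, x, y)
print(neuron(x, w, b))  # close to 0.5
```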

This isn't to say there hasn't been a lot of fruitful work. Deep learning and brains probably take advantage of similar concepts of universality, criticality, and near-chaos with stuff like the renormalization group and other things from statistical physics. However, these concepts are general and underlie basically all complex systems. I'm skeptical that a strong(er) AI would look anything particularly like a brain. Brains and neurons evolved from specific pressures over time, and have very different constraints from any non-biological system.

If you really want to get biologically inspired, a fruitful and dystopian direction might be to just use actual neurons. There's been work with pea-sized clumps of neurons called organoids, which can spontaneously form brain waves and show a lot of promise (or horror, depending on your perspective).

https://www.sciencenews.org/article/clumps-nerve-cells-lab-spontaneously-formed-brain-waves
https://www.frontiersin.org/journals/science/articles/10.3389/fsci.2023.1017235

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

A big flaming stink posted:

I appreciate ai art supporters doing a better job of showing how much it sucks than its detractors ever could
Regardless, I'm just stoked that AI can give me a Hieronymus Bosch extended universe.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

GlyphGryph posted:

Finding hard numbers is difficult, but I'm pretty damned confident. Live music was absolutely everywhere in the 1800s and early 1900s.
It certainly was, but how many people made it their living, as opposed to a pastime, in an era when TV, radio, and computers didn't exist?

IMO, people are way too worried about "AI" taking creative jobs. It's terrible at that stuff. Maybe some AI-generated music will get used more for HR training videos or whatever, but you can already license human music for those tasks for a few bucks. It's certainly no threat to good music.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

MixMasterMalaria posted:

Live music performance would likely have been more viable as a job prior to the advent of recorded music and the subsequent increased sophistication / taste standard codification of produced recordings distributed via mass media.
Of course, but at the same time things were much more localized, and production was so much more limited that the great majority of people worked in agriculture. People back then had more leisure time than now, but much less productive capacity to support professional artists. Thus, given the leisure time, there were a lot of musicians who played for their family, friends, and community, but few professional ones, who would necessarily have been itinerant.

cat botherer fucked around with this message at 02:08 on May 31, 2023

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.
Watson was about the last time IBM tried to do anything innovative before they just gave up and resigned themselves to their moribund but still profitable mainframe market.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Tree Reformat posted:

Yeah, I suppose it's worth clarifying that the Chinese Room as an obfuscated backdoor argument for dualism is complete nonsense to me (my own beliefs on the subject are hard deterministic materialism). I was more saying it's proven to be an effective criticism of the difficulty of defining and testing for consciousness/sapience in others.

Theory of Mind breaks down when it really is an open question whether the "other person" may or may not be a for-real p-zombie. Artificial Intelligence research may make solipsists of us all.
Dualism is bullshit, but materialism is IMO the wrong way of looking at it. Idealism is a much simpler assumption for understanding consciousness/the "hard problem" of consciousness. It also goes along with the fairly popular philosophical position of panpsychism - which suggests everything has some kind of vague "consciousness" that can produce the subjective experience of consciousness in humans. When we decide that a rock has no consciousness, humans have consciousness, and dogs have consciousness but maybe less so, the line is very arbitrary and fuzzy. The observables of all of these systems are explainable objectively from a purely materialist perspective, but the fundamentally subjective experience of consciousness is pretty much the one thing that isn't explainable. I know I'm self-aware, even though you have no way of verifying that I'm not a P-zombie - since all of my behavior comes from the physical action of my brain, body, and environment.

Yeah I'm a Marxist and all of that, but it's not really much of a contradiction. Humans are physical systems and all of our externally-observable aspects can be explained by materialism - so that's probably a simpler and more parsimonious perspective if you're interested in economic matters.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Tei posted:

And at the same time, it's ridiculous to believe you can't tell a person in this state apart from a person with consciousness. Consciousness produces answers that would be different from "automatic responses." The behavior of a person in this state would be immediately, obviously weird.
That's a very shallow understanding. Sufficiently complex AI or other physical systems (e.g., a human brain) can produce equally convincing answers. Every externally-observable behavior you can think of is 100% explainable through physics, without the need to invoke any idea of "consciousness." You're debating something that was answered by science long ago.

In fact, you're misunderstanding the entire concept of P-zombies. The whole idea is that they are non-conscious entities that are otherwise indistinguishable from conscious ones.

cat botherer fucked around with this message at 21:20 on Jul 12, 2023

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Tei posted:

But that's absurd and impossible; the idea is wrong. Humans would tell the difference.
You’re just asserting this with no evidence. Human behavior is explainable through the physics of our brains.

Doctor Malaver posted:

We will see consciousness when the machine stops responding as instructed. Some engineer will write a prompt and there will be no answer. They'll look for problems in the network, code, etc., and there will be none. Just silence from the machine, or an unrelated response, one that doesn't even attempt to fulfill the prompt.
Oh man, mysterious unexplainable problems are nothing new in AI/ML/anything with computers. Sufficiently complex systems are impossible for humans to fully reason about. That fact is orthogonal to consciousness.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Doctor Malaver posted:

I can't tell if you're being ironical or what, but I don't see why refusing to obey orders wouldn't be a sign of consciousness in a system whose purpose is to obey orders. Of course, once you rule out technical issues.
You can’t rule out technical issues or an insufficient understanding of the physical dynamics. You’re clearly not a programmer.

Basically, you are saying that if you can’t sufficiently predict the behavior of a system, the unexplainable behavior must be due to “consciousness,” as opposed to the simpler assumption that you don’t fully understand the thing. This is basically the same as the fallacious appeal to the “god of the gaps” as a proof of the existence of divinities.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Doctor Malaver posted:

The difference is that the child lived its experiences and AI had the experiences fed to it. But AI builds on top of them -- I assume it adds the prompts it is given and the feedback from its handlers to its calculations. They become sustained memory.
Humans already come with a tremendous amount of built-in software in our brains. Everything from social instincts to proprioception. It’s much more elaborate than any ML software.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

SaTaMaS posted:

The disruption isn't just people getting laid off due to AI, it's also jobs going unfilled due to being unable to find people with the necessary AI skills, which seems to be the bigger problem at the moment.
I’m an unemployed data scientist, and that’s news to me. The data scientist job market is in the toilet worse than the rest of the tech industry, despite it being the “AI” profession.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Tei posted:

My impression is that the roles AI is going to need are:
- Math people, into statistics, specifically the type of statistics popular modern AI uses
- System administrators who can cobble together the multiple pieces that make a modern AI system breathe
- Continuous Integration / Developers for everything else. Because in the end, even ChatGPT needs somebody to make a website

Programmers are easy to come by. CI people maybe harder? System administrators are also plentiful, but maybe already taken? AI-math people are the hardest to find.

"Prompt engineer" or that stuff, I think that would be extra rare? We are not there yet.
Math people are a dime a dozen, unfortunately. Most applied ML doesn't really involve very advanced statistics. Businesses generally don't care about statistical soundness in my experience (dumb, but statistics just aren't buzzy enough for managers). CI/CD people are doing relatively well, but so are programmers (in comparison to data scientists). Software engineering is actually harder than classic script-monkey data science, which is why a lot of the demand is moving to data/ML engineering type positions. Most businesses want somebody who understands the ML stuff well enough and can actually integrate it into production.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.
Everyone is rushing to get their customer-service chatbots out. In a vacuum, that could be worthwhile compared to paying call center employees, but I think the shine will wear off quickly. They don't work particularly better than previous generations of chatbots for that purpose, and customers are extremely hostile to them.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Pvt. Parts posted:

Ehhh I dunno, part of the inherent value/appeal of ChatGPT and similar models is that they are very accessible and require little to no expertise to get quite a bit of value out of. You just describe in plain language what you want it to do or fetch and it does it for you. With some tweaking they can easily become the "search engines" of the future. Microsoft is certainly betting on that angle early with Copilot.
It's too expensive to run to ever be profitable as a search engine.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Freakazoid_ posted:

https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf

Automation, including AI, has been chipping away at middle class jobs.

The notion that technology creates more jobs than it replaces is misdirection. The jobs we're getting out of technology are no longer good paying middle class jobs. We're all being dog piled into the lower class and being told to be thankful we even have a job.
If it wasn't "AI," there would be some other excuse du jour to justify the continued chipping away at the well-being of working-class people in the increasingly doomed quest to slow the falling rate of profit on capital investment.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.
Libertarian 2000s D&D is back again baby! Awoo! (wolf howl)

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

fez_machine posted:

I was thinking France and Korea

Anyway, A.I. returns to the source, with a neural network being trained on the sights and sounds experienced by a baby
Much like a baby, that’s loving stupid.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.
Damnit Frontiers, stop giving open access journals a bad name!

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.
I think it will be quite some time before AI-generated videos are convincing. For the time being, I don't think it would be broadly more useful than the animation equivalent of tweening. That isn't nothing, of course, but it doesn't mean that all video is untrustworthy and everything is lost.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Quixzlizx posted:

Is it just me, or is this less of an "AI grifting story" and more of a "grifters who happened to use AI" story? Like, maybe it lowered the amount of effort required, but all of the grifting elements could've easily been done before ChatGPT existed. They would've had to do a GIS/Pinterest search for the picture instead of entering an AI prompt.
Yeah, all the images and everything were obviously not photos to anyone with half a brain. I don't think anyone showed up expecting it to look like that - they just naturally expected it wouldn't be so out-of-this-world grim and half-assed.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Staluigi posted:

the humans in the matrix weren't there as a power source, their input was the only means by which ai could harvest original nonhallucinatory data
The Wachowskis' original script actually had the humans being used for computation, but the studio thought that was too complicated for viewers to understand, so they changed it to the dumbshit power plant thing that makes no sense.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

SaTaMaS posted:

Cool so you have no idea how LLMs or transformers work
"Context" exists on a spectrum. Transformer models still operate with a relatively crude (but still pretty useful) attention mechanism on fixed-length context windows. There's no ability to transfer deep semantic information - e.g. to generalize information between sources and develop internal hypotheses. As an example, look how bad they are at simple arithmetic.

Current models fall especially short of human contextual understanding when you consider the superhuman amounts of information they are trained on. Humans are incredibly efficient at learning. Fundamentally, all current models boil down to advanced nearest-neighbors. They can learn embeddings to make that far more effective, but they cannot extrapolate outside of the space the training data occupies.
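To make the "attention on a fixed-length context window" point concrete, here's a stripped-down numpy sketch of scaled dot-product attention (one head, no masking or learned projections, so purely illustrative and not any real model's code). The output for each position is just a similarity-weighted average over whatever happens to be in the window:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy scaled dot-product attention: every position in the (fixed-length)
    window attends to every other position via query/key similarity, and the
    output is a similarity-weighted average of the values."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the window
    return weights @ V

# Usage: a "context window" of 4 token embeddings, dimension 8, self-attending.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (4, 8): each token becomes a weighted blend of the window
```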

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

SaTaMaS posted:

The reason LLMs are bad at arithmetic isn't because of deep semantic information. Arithmetic involves almost no semantics, it's entirely syntax. However it involves executing precise, logically defined algorithmic operations, while LLMs are designed to predict the next word in a sequence based on learned probabilities.
Effective algorithms for actually doing arithmetic put in a layer of semantics. Separately from that, though, a human can learn arithmetic algorithms from a textbook, but LLMs cannot.
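As a concrete example of what I mean by an arithmetic algorithm, here's schoolbook column addition as a short, purely illustrative Python sketch. It's a precise, deterministic procedure of the sort a person can pick up from a textbook, not a probabilistic next-token guess:

```python
def add_decimal_strings(a: str, b: str) -> str:
    """Schoolbook column addition: walk the digits right to left, carrying.
    A precise, deterministic procedure, which is exactly what probabilistic
    next-token prediction doesn't give you."""
    i, j, carry, out = len(a) - 1, len(b) - 1, 0, []
    while i >= 0 or j >= 0 or carry:
        total = carry
        if i >= 0:
            total, i = total + int(a[i]), i - 1
        if j >= 0:
            total, j = total + int(b[j]), j - 1
        out.append(str(total % 10))
        carry = total // 10
    return "".join(reversed(out))

print(add_decimal_strings("987654321", "123456789"))  # 1111111110
```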

quote:

Sure, but most people can't or don't extrapolate beyond "common sense" either.
"Common sense" extrapolations are precisely what makes humans so much better at reasoning than LLMs. Common sense is an amazing thing, it just seems mundane because of how common it is.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

GlyphGryph posted:

For what it's worth, I'm pretty sure robotics has been part of the OpenAI core mission from their founding. The company started with four explicitly stated goals, and goal 2 was "build a robot". Goal 3 was natural language parsing, the one people commonly associate with the company, but the robot one was actually first!
Yeah but they really haven’t done poo poo in the robotics space.
