|
StratGoatCom posted:hahaha, no. Commercialization is the thing; it had massively slid too much toward transformation before; while it wasn't with AI in mind, the sort of precedent it sets is exactly what is needful in dealing with this tech. As a specific example, a tremendous amount of the art we have from the Renaissance is portraits of rich people. You'd think that we'd have much more art concerning peasants, as they were the vast majority of the population at the time. I guess the artists must have had some kind of ulterior motive. I sure hope it wasn't anything commercial, because the artists' styles seemed to be influenced a lot by each other!
|
# ¿ May 27, 2023 02:23 |
|
|
# ¿ May 20, 2024 19:06 |
|
Gentleman Baller posted:If you have any evidence or legal analysis that points to this I would love to read it.
|
# ¿ May 27, 2023 15:39 |
|
Tei posted:That don't exists.
|
# ¿ May 27, 2023 18:10 |
|
reignonyourparade posted:Copying styles is legal. Tei posted:That would be a complicate way to copy my style, but is still copying the style. Ofuscating something might confuse robots and naive people, but it unimpress judges. cat botherer fucked around with this message at 00:09 on May 29, 2023 |
# ¿ May 29, 2023 00:03 |
|
gurragadon posted:This is from earlier in the week and I missed it, it's really interesting and what excites me the most about machine learning and AI research. Researchers are finding better ways to break down processes in our brains and replicating them in machines and a bunch of these systems together would be needed for any kind of AI. All of the similarities between neural nets and brains that are emerging really make me think that it's the right direction to be heading in. It's both very impressive that humans might finally be able to model our own intelligence, but humbling to know that there is nothing special about being a human that can't be replicated. Almost all existing neural networks are also trained by variants of gradient descent, which basically just walks downhill to minimize an objective function. This is slow and takes billions of times more energy than neurons. This isn't to say there hasn't been a lot of fruitful work. Deep learning and brains probably take advantage of similar concepts of universality, criticality, and near-chaos with stuff like the renormalization group and other things from statistical physics. However, these concepts are general and underlie basically all complex systems. I'm skeptical that a strong(er) AI would look anything particularly like a brain. Brains and neurons evolved from specific pressures over time, and have very different constraints from any non-biological system. If you really want to get biologically-inspired, a fruitful and dystopian direction might be to just use actual neurons. There's been work with pea-sized clumps called organoids that can spontaneously form brain waves, and show a lot of promise (or horror, depending on your perspective). https://www.sciencenews.org/article/clumps-nerve-cells-lab-spontaneously-formed-brain-waves https://www.frontiersin.org/journals/science/articles/10.3389/fsci.2023.1017235
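The "walks downhill" description of gradient descent can be made concrete with a minimal sketch (illustrative only, not anyone's actual training code; the function names and learning rate are made up for the example):

```python
# Minimal gradient descent: repeatedly step against the gradient of an
# objective function until we (hopefully) land near a minimum.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # step downhill, opposite the gradient
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_min)  # converges toward 3.0
```

Real deep-learning training does the same walk, just over millions of parameters at once with stochastic estimates of the gradient, which is part of why it burns so much energy.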
|
# ¿ May 30, 2023 17:29 |
|
A big flaming stink posted:I appreciate ai art supporters doing a better job of showing how much it sucks than its detractors ever could
|
# ¿ May 31, 2023 00:24 |
|
GlyphGryph posted:Finding hard numbers is difficult, but I'm pretty damned confident. Live music was absolutely everywhere in the 1800s and early 1900s. IMO, people are way too worried about “AI” taking creative jobs. It’s terrible at that stuff. Maybe some AI generated music will get used more for HR training videos or whatever, but you can already license human music for those tasks for a few bucks. It certainly is no threat to good music.
|
# ¿ May 31, 2023 00:39 |
|
MixMasterMalaria posted:Live music performance would likely have been more viable as a job prior to the advent of recorded music and the subsequent increased sophistication / taste standard codification of produced recordings distributed via mass media. cat botherer fucked around with this message at 02:08 on May 31, 2023 |
# ¿ May 31, 2023 02:06 |
|
Watson was about the last time IBM tried to do anything innovative before they just gave up and resigned themselves to their moribund but still profitable mainframe market.
|
# ¿ May 31, 2023 16:22 |
|
Tree Reformat posted:Yeah, I suppose its worth clarifying that the Chinese Room as an obfuscated backdoor argument for dualism is complete nonsense to me (as my own beliefs on the subject are hard deterministic materialism). I was more saying its been proven as an effective criticism about the difficulty of defining and testing for consciousness/sapience in others. Yeah I'm a Marxist and all of that, but it's not really much of a contradiction. Humans are physical systems and all of our externally-observable aspects can be explained by materialism - so that's probably a simpler and more parsimonious perspective if you're interested in economic matters.
|
# ¿ Jul 12, 2023 20:24 |
|
Tei posted:And at the same time, is ridiculous to believe you can't tell a person in this state can't be separate from a person with conscience. Conscience produce answers that would be different than "automatic reponses". The beavior of a person on this state would be immediatelly obvious weird. In fact, you're misunderstanding the entire concept of P-zombies. The whole idea is that they are non-conscious entities that are otherwise indistinguishable from conscious ones. cat botherer fucked around with this message at 21:20 on Jul 12, 2023 |
# ¿ Jul 12, 2023 21:16 |
|
Tei posted:But thats absurd and imposible, the idea is wrong. Humans would tell the difference. Doctor Malaver posted:We will see consciousness when the machine stops responding as instructed. Some engineer will write a prompt and there will be no answer. They'll look for problems in network, code, etc and there will be none. Just silence from the machine, or an unrelated response, one that doesn't even attempt to fulfill the prompt .
|
# ¿ Jul 12, 2023 21:49 |
|
Doctor Malaver posted:I can't tell if you're being ironical or what, but I don't see why refusing to obey orders wouldn't be a sign of consciousness in a system whose purpose is to obey orders. Of course, once you rule out technical issues. Basically, you are saying that if you can’t sufficiently predict the behavior of a system, the unexplainable behavior must be due to “consciousness,” as opposed to the simpler assumption that you don’t fully understand the thing. This is basically the same as the fallacious appeal to the “god of the gaps” as a proof of the existence of divinities.
|
# ¿ Jul 12, 2023 22:00 |
|
Doctor Malaver posted:The difference is that the child lived its experiences and AI had the experiences fed to it. But AI builds on top of them -- I assume it adds the prompts it is given and the feedback from its handlers to its calculations. They become sustained memory.
|
# ¿ Jul 13, 2023 15:04 |
|
SaTaMaS posted:The disruption isn't just people getting laid off due to AI, it's also jobs going unfilled due to being unable to find people with the necessary AI skills, which seems to be the bigger problem at the moment.
|
# ¿ Oct 12, 2023 00:29 |
|
Tei posted:My impresion is that the roles AI is going to need are:
|
# ¿ Oct 23, 2023 13:16 |
|
Everyone is rushing to get their customer-service chatbots out. In a vacuum, that could be worthwhile compared to paying call center employees, but I think the shine will wear off quick. They don't work particularly better than previous generations of chatbots for that purpose, and customers are extremely hostile to them.
|
# ¿ Nov 18, 2023 14:38 |
|
Pvt. Parts posted:Ehhh I dunno, part of the inherent value/appeal of ChatGPT and similar models is that they are very accessible and require little to no expertise to get quite a bit of value out of. You just describe in plain language what you want it to do or fetch and it does it for you. With some tweaking they can easily become the "search engines" of the future. Microsoft is certainly betting on that angle early with Copilot.
|
# ¿ Nov 20, 2023 13:05 |
|
Freakazoid_ posted:https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf
|
# ¿ Jan 8, 2024 17:52 |
|
Libertarian 2000s D&D is back again baby! Awoo! (wolf howl)
|
# ¿ Jan 24, 2024 03:13 |
|
fez_machine posted:I was thinking France and Korea
|
# ¿ Feb 2, 2024 00:43 |
|
Damnit Frontiers, stop giving open access journals a bad name!
|
# ¿ Feb 15, 2024 20:49 |
|
I think it will be quite some time before AI generated videos are convincing. For the time being, I don’t think that it would be broadly more useful than the animation equivalent of tweening. That isn’t nothing, of course, but it doesn’t mean that all video is untrustworthy and everything is lost.
|
# ¿ Feb 17, 2024 00:20 |
|
Quixzlizx posted:Is it just me, or is this less of an "AI grifting story" and more of a "grifters who happened to use AI" story? Like, maybe it lowered the amount of effort required, but all of the grifting elements could've easily been done before ChatGPT existed. They would've had to do a GIS/Pinterest search for the picture instead of entering an AI prompt.
|
# ¿ Mar 2, 2024 00:57 |
|
Staluigi posted:the humans in the matrix weren't there as a power source, their input was the only means by which ai could harvest original nonhallucinatory data
|
# ¿ Mar 6, 2024 14:20 |
|
SaTaMaS posted:Cool so you have no idea how LLMs or transformers work Current models fall especially short of human contextual understanding when you consider the superhuman amounts of information they are trained on. Humans are incredibly efficient at learning. Fundamentally, all current models boil down to advanced nearest-neighbors. They can learn embeddings to make that far more effective, but they cannot extrapolate outside of the space the training data occupies.
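The nearest-neighbors point above can be sketched in a few lines (a toy illustration, not a claim about any real model's internals; the data and labels are made up):

```python
import numpy as np

# Toy 1-nearest-neighbor predictor in an embedding space. Whatever query
# it gets, the answer is always pulled from the training set -- it can
# interpolate among seen points but never produce a label it hasn't seen.
def nearest_neighbor_predict(train_embeddings, train_labels, query):
    dists = np.linalg.norm(train_embeddings - query, axis=1)
    return train_labels[int(np.argmin(dists))]

train_embeddings = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
train_labels = ["a", "b", "c"]

# Even a query far outside the training region maps back to a seen label.
print(nearest_neighbor_predict(train_embeddings, train_labels,
                               np.array([100.0, 100.0])))  # prints "c"
```

A learned embedding makes the distance metric much smarter, but the output is still confined to (combinations of) what was in the training data.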
|
# ¿ Mar 15, 2024 16:07 |
|
SaTaMaS posted:The reason LLMs are bad at arithmetic isn't because of deep semantic information. Arithmetic involves almost no semantics, it's entirely syntax. However, it involves executing precise, logically defined algorithmic operations, while LLMs are designed to predict the next word in a sequence based on learned probabilities. quote:Sure, but most people can't or don't extrapolate beyond "common sense" either.
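The contrast can be made concrete: grade-school addition is a precise, purely syntactic procedure over digit strings, with no probabilities anywhere (a minimal sketch of that procedure, not anything an LLM actually runs):

```python
# Grade-school addition over digit strings: deterministic, exact, and
# purely syntactic -- the kind of algorithm a next-token predictor can
# only approximate statistically.
def add_digit_strings(a, b):
    a, b = a.zfill(len(b)), b.zfill(len(a))  # pad to equal length
    result, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        result.append(str(total % 10))
        carry = total // 10
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(add_digit_strings("999", "1"))  # prints "1000"
```

The carry logic is exactly where LLMs tend to slip: one wrong digit anywhere and the whole answer is wrong, whereas the algorithm never misses.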
|
# ¿ Mar 15, 2024 16:53 |
|
|
GlyphGryph posted:For what its worth, I'm pretty sure robotics have been part of the openAI core mission from their founding. The company started with four explicitly stated goals and goal 2 was "build a robot". Goal 3 was natural language parsing, the one people commonly associate with the company, but the robot one actually came first!
|
# ¿ May 7, 2024 23:40 |