|
Gynovore posted:Panpsychism doesn't actually explain anything, and it doesn't lead to any testable hypotheses. It sounds like something one would come up with after reading Something Deeply Hidden and smoking a few bowls. It's just animism/pantheism dressed up for the modern academic age, much the same as dualism is apologia for the concept of souls. Just like traditional religion, they are the refuge of those who don't want to accept we are just crude matter, and all the existential implications that means.
|
# ? Oct 5, 2023 22:22 |
|
Tree Reformat posted:It's just animism/pantheism dressed up for the modern academic age, much the same as dualism is apologia for the concept of souls. Yeeeep. And man, people get real bothered when you start talking about there likely not being any soul or afterlife or anything, even in the most gentle, mutually consenting and respectful conversation. At least as best as we can tell.
|
# ? Oct 5, 2023 22:28 |
|
The part that always gets me is that we're multiple thinking things that think they are a single thinking thing because of how quickly they can communicate with each other within the skull.
|
# ? Oct 5, 2023 22:32 |
|
Bel Shazar posted:The part that always gets me is that we're multiple thinking things that think they are a single thinking thing because of how quickly they can communicate with each other within the skull. Split brain research is spooky as gently caress
|
# ? Oct 5, 2023 22:50 |
|
Mederlock posted:Split brain research is spooky as gently caress We agree with both of you
|
# ? Oct 5, 2023 23:29 |
|
Vector databases are cool but at the end of the day they're just about finding something from a sea of information that is similar to some other information you're searching for. It's really just an AI assisted similarity search. Still useful, but it isn't some kind of magic bullet that solves the hard problem of consciousness.
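To make that concrete, here's a minimal sketch of what such a similarity search boils down to once everything is embedded as vectors. The vectors and the tiny in-memory "database" below are made up for illustration; a real system would get its embeddings from a trained model and use an approximate-nearest-neighbor index rather than a brute-force scan:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors; 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query, index):
    # Brute-force scan: return the key whose stored vector is most similar to the query.
    return max(index, key=lambda key: cosine_similarity(query, index[key]))

# Toy "vector database": three documents embedded in three dimensions (made-up numbers).
index = {
    "cats": [0.9, 0.1, 0.0],
    "dogs": [0.8, 0.2, 0.1],
    "stocks": [0.0, 0.1, 0.9],
}

print(nearest([0.9, 0.1, 0.0], index))  # → cats
```

That's the whole trick: nothing about meaning or understanding lives in the search itself, only in whatever produced the vectors.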
|
# ? Oct 6, 2023 01:39 |
|
Gynovore posted:Panpsychism doesn't actually explain anything, and it doesn't lead to any testable hypotheses. It sounds like something one would come up with after reading Something Deeply Hidden and smoking a few bowls.

Well, first, again, I'm not arguing for panpsychism. They just talk a whole lot about incredibly basic forms of phenomenal consciousness, and it's a good framework for talking about that, but none of that is unique to them. But that's why, like I said, psychology is almost entirely behavioral, not phenomenal. Psychology is a science; philosophy of mind is a philosophy. I can't see much of any theory of consciousness being all that testable (probably why there's so little certainty in the whole thing). This conversation kind of necessitates it, though; psychology doesn't address the things people are talking about here.

If you want to talk about AI consciousness from a purely behavioral perspective, then you'd be making a bad argument, first, but I also think you'd be a lot more likely either to say it's conscious, or to say it's not conscious and we're not conscious either. No one's doing that, though; we're not having a scientific conversation when we're talking about whether an AI can be sentient or some kind of conscious or anything. That's a philosophical conversation. Panpsychism does attempt to explain why things are conscious, like all similar theories, but it does not have explanatory power in the way a scientific theory would because... it's philosophy.

Tree Reformat posted:It's just animism/pantheism dressed up for the modern academic age, much the same as dualism is apologia for the concept of souls.

What the hell kind of panpsychists have you been talking to? Lol. Cuz, no. The question of why things are any kind of conscious at all just doesn't really have an answer. I don't see any way the argument that mind is an emergent property of completely un-mind-like matter is any better or less weird than the argument that it's an emergent property of mind-like matter coming together in an incredibly fundamental way. Panpsychists in philosophy aren't out there being like "I tripped and realized the whole universe is alive and thinking!"

Edit: I'd recommend reading the paper Chalmers wrote on panpsychism and seeing what they actually believe. It's nothing about photons having souls, or there being an afterlife, or rocks having a will. There was a guy on YouTube who did a video about it a little bit ago, and he completely misinterpreted it as "this means the universe has a will!" when it absolutely doesn't. A lot of people seem to not really get what actual panpsychists in modern philosophy believe, because I think they don't realize how incredibly basic phenomenal consciousness can be, and they start assigning attributes of a higher consciousness to it.

BrainDance fucked around with this message at 04:18 on Oct 6, 2023 |
# ? Oct 6, 2023 03:03 |
|
KwegiboHB posted:I'm looking to avoid qualifying words like 'just', 'only', and 'merely' in this. average AI enjoyer (USER WAS PUT ON PROBATION FOR THIS POST)
|
# ? Oct 6, 2023 09:32 |
|
BrainDance posted:What the hell kind of panpsychists have you been talking to? Lol. Cuz, no.

I don't think you're saying rocks have a mind of their own (outside of the computer game franchise Master of Orion), but can you explain what you mean by "phenomenal consciousness"? I don't personally think my pocket calculators have minds, either, but it still seems like a hard sell that my PC does. Or some server farm, whatever the physical location of WALL-E happens to be. I hope I am not being too dense here, but it seems somewhat of a given that animals possess some level of consciousness; they are self-aware and would like not being killed, and so on. This is not similarly true of computers. We can argue about whether a human being or a horse is, in some logical manner, fundamentally better than a Markov chain, but the horse demonstrably has a theory of mind.
|
# ? Oct 6, 2023 12:28 |
|
I mean, you might be able to create a 'conscious' ML model, but it would have to be continuously running some self-aware learning loop outside of any interaction, which is not what current LLMs do.
|
# ? Oct 6, 2023 13:49 |
|
Private Speech posted:I mean you might be able to create a 'conscious' ML model but it would have to be running continuously some self-aware learning loop outside of any interaction, which is not what current LLMs do. It seems like three systems might be enough as I described one page ago in this very thread.
|
# ? Oct 6, 2023 14:00 |
|
KwegiboHB posted:It seems like three systems might be enough as I described one page ago in this very thread. No. (USER WAS PUT ON PROBATION FOR THIS POST)
|
# ? Oct 6, 2023 14:02 |
|
Well that settles that then, wrap it up everyone.
|
# ? Oct 6, 2023 14:03 |
|
Someone else took more time to reply to you before.
|
# ? Oct 6, 2023 14:04 |
|
Rappaport posted:I don't think you're saying rocks have a mind of their own (outside of the computer game franchise Master of Orion), but can you explain what you mean by "phenomenal consciousness"? I don't personally think my pocket calculators have minds, either, but it still seems like a hard sell that my PC does. Or some server farm, whatever the physical location of WALL-E happens to be.

It's one of those things where it feels like the more in detail you explain it, the further you get from what it is, because it's intentionally an incredibly general thing. But generally that phrase I've been using, "there's something that it's like to be something," is phenomenal consciousness. Basically it's just "has subjective experience," or maybe "experiences qualia." Anything that can be said to have any kind of subjective experience. Like, there's something that it's like to be you. There's (possibly, depending on what you think) nothing that it's like to be a rock.

We have a very complicated type of phenomenal consciousness, though, and a whole lot more than just that. This definition very, very intentionally does not include all the features of consciousness beyond that. Like, experiencing the feeling of remembering something: that's a thing that happens with some phenomenal consciousness, but it's not a necessary feature of phenomenal consciousness. The experience of seeing a red thing, that's definitely a thing that you need phenomenal consciousness to do, but it's not a requirement. The experience of thinking a thought, the experience of adding numbers, etc.

The hard part is that this can be incredibly, incredibly basic. Everything we can possibly imagine experiencing is massively more complex than the most basic experience of phenomenal consciousness. Imagine you have no thoughts, no senses, no awareness of anything besides an incredibly subtle buzzing noise. You don't even have an awareness that you're the one experiencing this buzzing noise. You have no sense of whether it's loud or quiet, no attributes of it, you can't reflect at all on it; your entire experience, down to the core, is that little far-away "bzzz." This is still way more complex than potentially the most basic thing required of phenomenal consciousness.

Or, think of being blackout drunk: you're sleeping and you have one of those drunk dreams that feels really, really far away and subtle. Like incredibly out of focus, and you can barely tell it happened. There are no physical qualities to it; it's more of a slight feeling. Then think about how that memory is gonna feel two days from then, like you only have a quick, very small sense of it. That is still incredibly more complex than would be required to be phenomenally conscious.

This is sorta a basic idea of most philosophy of mind, not just panpsychists. What's unique to the panpsychists is they talk about these very basic levels of phenomenal consciousness and call them microexperiences.

I'm intentionally saying "the experience of," too, because there's also psychological consciousness, which is more the mechanisms and behavior of something. Phenomenal states, for animals, have corresponding attributes in psychological consciousness. Take the experience of seeing red: if I describe how your brain sees red, the neurons that fire and stuff, I can say that I've explained the psychological aspect of seeing red, but I haven't explained the phenomenal aspect; I haven't explained what your experience of seeing red is. Psychological consciousness can be way more easily explained than phenomenal consciousness.

In short, it's the lowest level of subjectiveness where it can still be said that there's something that it's like to experience it. Phenomenal consciousness is seen as a synonym for "sentience." Nagel's paper "What Is It Like to Be a Bat?" doesn't use the term, I don't think, but it uses a lot of the language that is used to describe it now, and is really talking about it a lot. It definitely explains why we use it to separate things from psychological consciousness and all the other consciousnesses/levels of consciousness.
|
# ? Oct 6, 2023 15:42 |
|
BrainDance posted:But, that's why like I said psychology is almost entirely behavioral, not phenomenal. Psychology is a science, philosophy of mind is a philosophy, I can't see much of any theory of consciousness being all that testable (probably why there's so little certainty in the whole thing.) This conversation kind of necessitates it though, psychology doesn't address the things people are talking about here.

There are so many fundamental things wrong with a lot of what you're saying, but these two paragraphs go pretty deep, and the most obvious is this: you are aware that science is a methodology for doing some types of philosophy better and more reliably, right? To help account for how incredibly bad even the best philosophers are at doing even basic philosophy. They aren't two completely unrelated things. Science literally exists to answer philosophical questions; that is why it was made, and it is why it is used.
|
# ? Oct 6, 2023 20:05 |
|
You are dust, and unto dust you shall return.
|
# ? Oct 6, 2023 21:45 |
|
GlyphGryph posted:They aren't two completely unrelated things. Science literally exists to answer philosophical questions; that is why it was made, and it is why it is used.

I guess, if you are talking about early science, or like people wondering if you can infinitely divide a cake. Modern science is more like people looking at water with a magnifier and being like "what the gently caress, it's full of micro-bioids."
|
# ? Oct 6, 2023 22:09 |
|
What is going on with this thread lately, geez (USER WAS PUT ON PROBATION FOR THIS POST)
|
# ? Oct 6, 2023 22:27 |
|
Count Roland posted:What is going on with this thread lately, geez We're being haunted by cinci's ghost and he's laughing at us from the other side
|
# ? Oct 6, 2023 23:03 |
|
GlyphGryph posted:There are so many fundamental things wrong with a lot of what you're saying, but these two paragraphs go pretty deep, and the most obvious is this: you are aware that science is a methodology for doing some types of philosophy better and more reliably, right? To help account for how incredibly bad even the best philosophers are at doing even basic philosophy.

What is fundamentally wrong with what I'm saying? I'm summarizing basic things in philosophy of mind... I'm not saying science is completely unrelated. This distinction between psychology being behavioral (which I can assure you is very true) and philosophy of mind working with the phenomenal when psychology avoids that isn't my idea, and is pretty well accepted. How do you propose science answers questions about phenomenal aspects of consciousness?

If you're implying science exists to answer all the questions philosophy is able to, that's laughably wrong. Even viewing science as a subset of philosophy (only in a general sense; there is the philosophy of science, but even that itself isn't a question for science) still limits it to specific types of questions. Science did come from some earlier branches of philosophy, but "science literally exists to answer philosophical questions" is completely wrong about this. Science doesn't exist to answer phenomenal questions. It would be great if it could; you figure out a way and you'd definitely be getting published.

The "to help account for how incredibly bad even the best philosophers are at doing even basic philosophy" thing is just that kind of Neil deGrasse Tyson naive scientism. Science does well to answer what it's made to answer well; philosophy does well to answer what it's made to answer well.

BrainDance fucked around with this message at 02:45 on Oct 7, 2023 |
# ? Oct 7, 2023 01:44 |
|
Tree Reformat posted:It's just animism/pantheism dressed up for the modern academic age, much the same as dualism is apologia for the concept of souls.

Yeah, but you don’t really know what you are talking about.

Mederlock posted:Yeeeep. And man, people get real bothered when you start talking about there likely not being any soul or afterlife or anything, even in the most gentle, mutually consenting and respectful conversation. At least as best as we can tell.

Several mainstream Christian denominations outright do not believe in an afterlife of a soul independent of the body. The biblical account is of a resurrection of the physical body. Mind-body dualism is a relatively modern development that mostly originated in the Enlightenment (and not in religion). You can go to most old churches and they will often physically be surrounded by, or proximal to, a graveyard, because those folks believed in the resurrection of a material body.
|
# ? Oct 8, 2023 20:34 |
|
BrainDance posted:What is fundamentally wrong with what I'm saying? I'm summarizing basic things in philosophy of mind... Psychology is not purely behavioral. Behaviorism is a subset of psychology. To say that science is not meant to answer questions of phenomena is just wrong. What is depression? What is anxiety? What is OCD? These are basic questions about the phenomenology of consciousness. They would never have been discovered without methodology to answer questions related to phenomena.
|
# ? Oct 11, 2023 07:21 |
|
DeeplyConcerned posted:Psychology is not purely behavioral. Behaviorism is a subset of psychology.

Saying psychology is behavioral is not the same thing as saying all psychology is behaviorism. Behaviorism is a very specific thing that isn't summed up very quickly. Though, from a lot of the positions people are taking here, I don't see any other conclusion to draw besides radical behaviorism if you follow them (and radical behaviorism is wrong, most people think). Counterintuitively, in this distinction, cognitive things (or how we can see them outside ourselves, actually, not the things themselves) are measured in a behavioral way.

None of those questions you listed have phenomenal answers from psychologists. When we talk about something being phenomenal, we talk about the actual experience of it. Not of someone saying "I feel really down," or measuring how down they are, but of what it genuinely feels like to feel "really down": the actual experience of being down.

The distinction is like this, to use the stupid stoner question everyone's wondered: I can measure how well you see red. I can ask you if you're seeing red. I can do all sorts of things about you seeing red; I can watch your brain see red. This is the behavioral aspect of what we're talking about here with this distinction. I cannot go in and actually see what it's like for you to see red; this is what we're talking about with the phenomenal aspects of consciousness.

Similarly with depression: I can find all sorts of ways to measure depression and diagnose it, but this is all reliant on the behavioral aspects of it. Every single thing in the DSM or any diagnostic system is entirely un-phenomenological, in that it's looking at the reported feeling or the physical form of the feeling or other things that are associated with the feeling, but that doesn't tell us what the feeling is itself. Like I said with the S&P research I did, we measured things involved in learning. That did not tell us what the actual experience of learning really feels like, or if it even exists, even though it pretty clearly does.

What phenomenal consciousness is, and the problem with it, is described very well by Nagel's paper I linked a while ago. This is actually a really fundamental and basic thing that psychologists and philosophers agree on for the most part.
|
# ? Oct 11, 2023 08:16 |
|
Are there any interesting "reports" or whatever about how AI is being applied to society and the immediate and potential long-term effects it has? For example, I feel it's been awfully quiet about how corporations are switching over to AI. I've read one article about some game company that fired half of their staff to use AI instead (they used fancy, positive corporate words for this move), and about how schools are having trouble with cheaters, but not that much besides this.

Here's some word poop: I've got this nagging feeling that AI might suppress and slow down human ingenuity as it is merged into current society. Just as with any other technological leap, the pillars of society (like schools, governments or whatever) have a hard time keeping up and updating, and it seems that we'll all have a personal AI assistant jacked into our brains by phones, computers and whatever fancy VR glasses/lenses are around the corner soon enough. I'm too pessimistic not to think that everybody will be leaning on AI for "laborious thinking," and as this trickles into all jobs and becomes an essential part of the job, there will not be any room for free thinking or thinking outside the box, since time is money and using our brains takes time.

It's kind of like how information is being spoon-fed to us by TikToks and 10-word-long tweets that don't give us any nuance or depth in the information given. We all have to do research and check multiple sources to confirm everything we consume to get a "full article" on something, and lots and lots of people in the world don't do that. I don't think we might be entirely hosed, but man does it feel like we're all gluttonous cats that have been given a year's worth of cat food in one sitting and will eat until our stomachs explode.
|
# ? Oct 11, 2023 14:43 |
|
Imaginary Friend posted:Are there any interesting "reports" or whatever about how AI is being applied to society and the immediate, and potential long-term effects it has? For example, I feel it's been awfully quiet how corporations are switching over to AI. I've read one article about some game company that fired half of their staff to use AI instead (they used fancy, positive corporate words for this move) and how schools are having trouble with cheaters but not that much besides this. I work in the industry and am helping a fortune 100 build an AI system for internal use. These things are being used as glorified search engines with fancy summarization and suggestion features. They're pretty good at that kind of thing, but the gotcha is that the user needs to have the knowledge to really understand/validate what is being returned. Imagine hospitals trying to replace doctors with interns looking up symptoms on WebMD, and you can see where that idea falls down. It's better/more efficient than making your doctors look things up with literal textbooks, but you still need the doctors. Humans are bad at memorizing raw information, but pretty good at combining information and critical thinking. Generative AI mostly helps with the first thing by providing more relevant results. It's still pretty bad at the latter. KillHour fucked around with this message at 16:05 on Oct 11, 2023 |
# ? Oct 11, 2023 16:01 |
|
Imaginary Friend posted:Are there any interesting "reports" or whatever about how AI is being applied to society and the immediate, and potential long-term effects it has? For example, I feel it's been awfully quiet how corporations are switching over to AI. I've read one article about some game company that fired half of their staff to use AI instead (they used fancy, positive corporate words for this move) and how schools are having trouble with cheaters but not that much besides this. Here is a good one: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-AIs-breakout-year quote:There has been an explosive growth in the adoption of Generative AI (gen AI) tools within organizations. Within less than a year of their debut, about one-third of the surveyed organizations reported using gen AI in at least one business function quote:Industries relying most heavily on knowledge work are likely to see more disruption—and potentially reap more value. While our estimates suggest that tech companies, unsurprisingly, are poised to see the highest impact from gen AI—adding value equivalent to as much as 9 percent of global industry revenue—knowledge-based industries such as banking (up to 5 percent), pharmaceuticals and medical products (also up to 5 percent), and education (up to 4 percent) could experience significant effects as well. By contrast, manufacturing-based industries, such as aerospace, automotives, and advanced electronics, could experience less disruptive effects. This stands in contrast to the impact of previous technology waves that affected manufacturing the most and is due to gen AI’s strengths in language-based activities, as opposed to those requiring physical labor.
|
# ? Oct 11, 2023 17:23 |
|
SaTaMaS posted:Here is a good one: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-AIs-breakout-year quote:Industries relying most heavily on knowledge work are likely to see more disruption—and potentially reap more value. I strongly agree with this, except I'm not really seeing the effect on those industries as disruption, IME. The same people are doing the same kind of work, just with more tools. Let's take the example of a pharmaceutical company looking for new drugs. A lot of "new" drugs are actually new uses for old drugs. These are often found via Adverse Event Reporting. In other words, "I took a drug and something unexpected happened" - not necessarily bad, just unexpected. That kind of thing goes into a big national database that the FDA uses for safety monitoring, but it's also very useful for monitoring effects of drugs on much larger populations than a controlled trial. The thing is that those reports are mostly freeform text, with varying amounts of detail. Today, the best method for handling this is a technique called fact extraction. I'm not going to get too deep into it, but basically you hand-build a big graph of hierarchical relationships called an ontology, and you map that onto the text of whatever to pull out structured information about it. There are a bunch of industry-standard and top-secret proprietary ontologies for medicine. Here's a random example: https://bioportal.bioontology.org/ontologies/SCTO/. This is great and it works, but it's mostly pulling out words and phrases. It doesn't understand the text itself. An LLM can clean and summarize text before applying fact extraction, or you could have it do a vector encoding of the entire thing and store it in a vector database, or both. None of this is going to put an ontologist or data scientist or library sciences person out of work. 
It just gives them more tools to do the same core job of "find useful information in this gigantic avalanche of data." Edit: My point is that the real value in these technologies will be to integrate them into existing workflows, and industries that are adopting it already realize that. There is one kind of labor that it might replace - I was working with a customer on migrating their hand-maintained knowledge repo to a structured system, and the existing documents were so bad that automated fact extraction of the time couldn't pull out what they wanted. So they hired dozens of temps to go through 7k documents and hand-extract the information into excel sheets. This was incredibly time consuming and error prone, and literally nobody involved wanted to do it. The company didn't want to do it, the temp agency they contracted didn't want to do it (because it was very short term), the people brought on hated the work. It put some food on the tables of those temp workers for a couple months, but from an objective point of view, it was a very expensive and wasteful way of doing that compared to pegging corporate tax rates to GDP per capita growth or something. KillHour fucked around with this message at 18:49 on Oct 11, 2023 |
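The ontology-mapping idea can be caricatured in a few lines. To be clear, this is a toy sketch under made-up assumptions: the `ontology` dict and the report text are invented stand-ins for the large standard ontologies and real adverse-event reports described above, and real fact extraction does far smarter matching than substring search:

```python
# Toy fact extraction: map freeform report text onto a tiny hand-built "ontology."
# Real ontologies are large hierarchical graphs; this flat dict only shows the shape
# of the idea, and naive substring matching would misfire in practice ("rash" in "crash").
ontology = {
    "headache": "nervous_system_disorder",
    "dizziness": "nervous_system_disorder",
    "rash": "skin_disorder",
    "nausea": "gastrointestinal_disorder",
}

def extract_facts(report_text):
    # Pull out (term, category) pairs for every ontology term found in the report.
    text = report_text.lower()
    return sorted((term, category) for term, category in ontology.items() if term in text)

report = "Patient took the drug and reported mild nausea and a persistent headache."
print(extract_facts(report))
# → [('headache', 'nervous_system_disorder'), ('nausea', 'gastrointestinal_disorder')]
```

The LLM's role in the pipeline described above sits upstream of this step: cleaning and summarizing the freeform text so the structured matching has better material to work with.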
# ? Oct 11, 2023 18:31 |
|
The disruption isn't just people getting laid off due to AI, it's also jobs going unfilled due to being unable to find people with the necessary AI skills, which seems to be the bigger problem at the moment.
|
# ? Oct 11, 2023 20:17 |
|
SaTaMaS posted:The disruption isn't just people getting laid off due to AI, it's also jobs going unfilled due to being unable to find people with the necessary AI skills, which seems to be the bigger problem at the moment. If you're talking about "prompt engineering" or whatever, that's the flashy clickbait side of it that really isn't going to be a long-term impact. Most of these tools are getting integrated into larger systems where the interface will be abstracted away. For instance, the latest Dall-E uses ChatGPT to preprocess your prompt so you (theoretically - it's still far from perfect) don't have to know giant lists of magic key words to get a decent result. Lots of effort is being poured into making these things as transparent as possible and the prompt engineering stuff is for the early adoption nerds who want to play with it like a puzzle box (Not that there's anything wrong with that). Saying "I can't get a job because I don't know prompt engineering" is going to be like saying you can't get a job because you don't know how to Google search. If you're talking about how to build applications with Gen AI and deal with them at a low-level, that's just how the tech industry is. The unemployment lines aren't full of LAMP stack developers who were unable to learn React. I have a now-useless VMWare certification, but that didn't stop me from learning K8s. These things aren't black magick. Some people who were lucky enough to get in early are getting rich right now, and everyone else in the industry is frantically trying to learn it. I can tell you that poo poo went on my resume as soon as I was put in a project even tangentially related to it (not soon enough to be insta-rich, unfortunately). KillHour fucked around with this message at 21:32 on Oct 11, 2023 |
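For what it's worth, the "abstracted away" point can be sketched in a couple of lines. This is a made-up rule table standing in for the LLM rewrite step in the Dall-E-style pipeline described above; the style hints are invented for illustration:

```python
# Hypothetical prompt preprocessing: the user types a plain description and the
# system quietly appends the "magic words" that tend to produce good results.
# A real pipeline uses an LLM for this rewrite, not a static lookup table.
STYLE_HINTS = {
    "photo": "natural lighting, sharp focus, 50mm lens",
    "painting": "oil on canvas, visible brushstrokes",
}

def preprocess_prompt(user_prompt, style="photo"):
    # The caller never sees or learns the appended keywords.
    return f"{user_prompt.strip()}, {STYLE_HINTS[style]}"

print(preprocess_prompt("a lighthouse at dusk"))
# → a lighthouse at dusk, natural lighting, sharp focus, 50mm lens
```

Once that kind of rewriting sits behind the interface, "knowing the magic words" stops being a marketable skill, which is the point being made above.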
# ? Oct 11, 2023 21:26 |
|
SaTaMaS posted:Here is a good one: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-AIs-breakout-year
|
# ? Oct 11, 2023 22:46 |
|
SaTaMaS posted:The disruption isn't just people getting laid off due to AI, it's also jobs going unfilled due to being unable to find people with the necessary AI skills, which seems to be the bigger problem at the moment.

I'm an unemployed data scientist, and that's news to me. The data scientist job market is in the toilet worse than the rest of the tech industry, despite it being the "AI" profession.
|
# ? Oct 12, 2023 00:29 |
|
cat botherer posted:I’m an unemployed data scientist, and that’s news to me. The data scientist job market is in the toilet worse than the rest of the tech industry, despite it being the “AI” profession. Two possibilities to throw out there - 1. Companies just finished a multi-year cycle of tech spending fueled by low interest rates and the "digital transformation" fad. Maybe we're still at the start of the new cycle. 2. With all the generative AI hype, maybe companies think they don't need any original data science work and can use the new LLM based tools that are being released at a breakneck pace.
|
# ? Oct 12, 2023 01:06 |
|
In China right now there are a ton of students who got specific degrees in AI, as a field. There are a whole lot of new jobs, too, but there is a mismatch between what the jobs want and what the students have so all of these AI students are struggling to actually get the jobs. I don't think China is alone here and I don't see there being a lack of people to fill those jobs, if anything there are going to be too many people if it just doesn't end up needing all the college students who are gambling on it being as massive as they think it will be (or being massive, but just not requiring that many people in the end.) I can see it potentially becoming something similar to library science where you need that degree, but you also need a degree in something else to go along with it for it to be worth all that much. https://www.sixthtone.com/news/1013462
|
# ? Oct 12, 2023 03:21 |
|
I relatively recently got put on the AI team in the second largest tech company (still can't figure out how that happened), and the big hopes currently in application are in medical, helping with energy consumption, accessibility, and making general humanitarian aid more effective. If you're talking things that purely help people outside of making things easier or personalizing things.
|
# ? Oct 13, 2023 20:49 |
|
Darko posted:I relatively recently got put on the AI team in the second largest tech company (still can't figure out how that happened), and the big hopes currently in application are in medical, helping with energy consumption, accessibility, and making general humanitarian aid more effective. If you're talking things that purely help people outside of making things easier or personalizing things.

Biotech companies are gobbling up AI tech companies too, so that they can do cell therapy testing at scale. It's pretty amazing to see.
|
# ? Oct 18, 2023 02:55 |
|
SaTaMaS posted:The disruption isn't just people getting laid off due to AI, it's also jobs going unfilled due to being unable to find people with the necessary AI skills, which seems to be the bigger problem at the moment.

At least in my field, a lot of the stuff that claims to need AI skills does not. Prompt engineering is just laughable as a skill; it's not hard to learn, and any of the difficulty that currently exists will be phased out as tooling improves. 99% of engineering roles requesting AI skills are also mostly silly, as all you are really doing is integrating with an API, which any engineer worth their salt can do. The real skills will be in creating and training custom models, and I mean real training, not what KwegiboHB thinks they are doing.

Mega Comrade fucked around with this message at 09:30 on Oct 18, 2023 |
# ? Oct 18, 2023 09:20 |
|
My impression is that the roles AI is going to need are:
- Math people, into statistics and the type of statistics popular modern AI uses
- System administrators that can cobble together the multiple pieces that make a modern AI system breathe
- Continuous Integration / Developers for everything else, because in the end, even ChatGPT needs somebody to make a website

Programmers are easy to come by. CI people, maybe harder? System administrators are also plentiful, but maybe already taken? AI-math people are the hardest to find. "Prompt engineer" or that stuff, I think would be extra rare? We are not there yet.
|
# ? Oct 22, 2023 16:49 |
|
Tei posted:My impresion is that the roles AI is going to need are:
|
# ? Oct 23, 2023 13:16 |
|
Tei posted:My impresion is that the roles AI is going to need are: Just the opposite, at the moment the AI-Math job market is saturated, while the app developers creating the apps which use AI APIs are harder to find.
|
# ? Oct 23, 2023 13:59 |