|
Dead Reckoning posted:TBH, most humans can't create meaningful art or coherently talk about ethical philosophy, so we're probably closer than we think. Creating a robot Mozart or Einstein might be hard, but beating the intelligence of the average human is shockingly easy. the shockingly beatable intelligence of the average human is actually beyond our most advanced computers at the moment and we're not even close to being able to simulate such intelligence. also, the AIs that are "creating" art really aren't, they are simulating "art" based on their creators' notions and preconceptions about what art is and what is worthwhile art.
|
# ? Nov 28, 2016 10:35 |
|
|
The answer to the Chinese Room Puzzle and the Turing Test stuff, to me, is that it's irrelevant if what's going on is "true" intelligence or if it has the "spark" of life or is "self-aware" or "human-level" or any of the other terms we try to use for the ineffable difference between us and the toaster. We'll have intelligence when we build something that we're not comfortable treating as not intelligent. If you want to get Cartesian, you don't know for sure that anyone's intelligent except maybe yourself; you extend recognition to others because you have to and because it's hard not to when they demand it. If machines are made that are sufficiently autonomous and convincing, then we might as well recognize them once it becomes socially awkward not to, and leave the fine detail work to the philosophers. To that end, we'll make bigger gains in public perception with things that aren't even part of the core intelligence question, like improved natural language skills and better-engineered body language. This also isn't a bar limited by what's achievable in the finicky hypothetical-technical sense that people normally talk about, since humans left alone will invest themselves in imaginary friends or pet rocks or a volleyball with a face drawn on it; sooner or later someone will make a Siri that's a little too good and we'll start having this conversation more seriously, even if under the hood it's nothing new. Those are my two AM two cents.
|
# ? Nov 28, 2016 11:28 |
|
Dolash posted:The answer to the Chinese Room Puzzle and the Turing Test stuff, to me, is that it's irrelevant if what's going on is "true" intelligence or if it has the "spark" of life or is "self-aware" or "human-level" or any of the other terms we try to use for the ineffable difference between us and the toaster.
|
# ? Nov 28, 2016 11:41 |
|
Rush Limbo posted:Considering that AI requires so much computational power to tackle the problem of, say, stairs and even then fucks it up with enough regularity to be practically useless unless, for example, you devote 100% of its effort for it to incredibly slowly tackle the problem. How advanced are the state-of-the-art stair-climbing robots anyway (basically, how many hundred sensors and negative feedback loops depending on those sensors)? Biologically speaking, climbing stairs isn't just a stereotyped motion; it additionally needs to constantly adjust for minor imbalances in the inherently unstable human body as well as minor changes in the flatness and elasticity of the ground and poo poo. Anything that can measure only things like the angle between the femur and tibia and the overall body tilt, but not the forces acting on each segment of the leg and foot, would be expected to fail miserably. I won't say no amount of AI can control a robot with dumb legs, but I definitely expect that putting a literal bucketload of cheap stretch sensors and poo poo into the leg makes the job orders of magnitude easier. suck my woke dick fucked around with this message at 13:55 on Nov 28, 2016 |
# ? Nov 28, 2016 13:50 |
|
Reveilled posted:I can't say I've ever met anybody like this to tell them. Do they really exist? The mainstream of AI ethics right now seems to be debating how big of a threat to humanity a general intelligence could be, rather than the ethical implications of loving one. Eh. I expect they're probably not actually representative of all AI ethics as a field, but much like the well known loudmouths in other fields they're very busy writing terrible guest articles and being interviewed for news website sci/tech and culture sections to the point where they drown out everything else.
|
# ? Nov 28, 2016 14:03 |
|
Stairs are more a problem for the sensors and mechanical components than for the AI, IMO. And it's not like it's an unsolvable problem: https://www.youtube.com/watch?v=tf7IEVTDjng&t=89s
|
# ? Nov 28, 2016 14:05 |
|
KOTEX GOD OF BLOOD posted:Let's first define what you mean by "human level AI," and by this I think you mean AI that causes a mind in the same way that a human brain causes a mind. I am going by this definition because no AI could truly be said to be "human-level" if it was incapable of semantic understanding. The follow-on from that is to imagine systems where a group of humans simulates a living creature's brain. For example, a house cat is estimated to have 760,000,000 neurons. So let's imagine we got the entire population of Earth to simulate a cat's brain. We have developed some amazing future scanning technology that allows us to take a snapshot, and we build a perfect model using 760,000,000 people who have access to some kind of email/pager system that they use to receive and send signals to their other connections. The remaining 5+ billion people act as error checkers, fixing holes in the network as they arise, bringing food and water to the participants, and so on. It seems widely accepted that many other animals have consciousness. In this situation, where our 760,000,000 people are replicating the behaviour of a cat's brain, does that replica have consciousness? If so, where does it reside?
|
# ? Nov 28, 2016 14:58 |
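Senor Tron's thought experiment above is easy to sketch in miniature. Below is a toy version, with every name, wiring choice, and threshold invented here (real neurons are far messier than this): each "person" is a unit that tallies incoming messages and signals its own targets once a threshold is met.

```python
def step(connections, thresholds, active):
    """One round of message passing: every active unit signals its targets,
    and a unit becomes active if enough signals arrive."""
    incoming = {unit: 0 for unit in thresholds}
    for unit in active:
        for target in connections.get(unit, []):
            incoming[target] += 1
    return {u for u, total in incoming.items() if total >= thresholds[u]}

# Three-unit chain standing in for 760,000,000 people: A fires B, B fires C.
connections = {"A": ["B"], "B": ["C"], "C": []}
thresholds = {"A": 1, "B": 1, "C": 1}

active = {"A"}
trace = []
for _ in range(3):
    active = step(connections, thresholds, active)
    trace.append(sorted(active))

print(trace)  # [['B'], ['C'], []]
```

Nothing in the loop cares whether a unit is a transistor, a neuron, or a person with a pager, which is exactly what gives the thought experiment its bite.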
|
Dolash posted:The answer to the Chinese Room Puzzle and the Turing Test stuff, to me, is that it's irrelevant if what's going on is "true" intelligence or if it has the "spark" of life or is "self-aware" or "human-level" or any of the other terms we try to use for the ineffable difference between us and the toaster. Senor Tron posted:The follow on from that is to imagine systems where a group of humans simulates a living creatures brain.
|
# ? Nov 28, 2016 16:14 |
|
KOTEX GOD OF BLOOD posted:You are making several of the bad assumptions here which Dreyfus describes, or at minimum the biological and psychological assumptions. In engineering terms I'm sure it's extremely difficult/practically impossible, but is there any theoretical reason why, assuming we could scan the current state of an animal's brain exactly, we couldn't simulate it in such a way?
|
# ? Nov 28, 2016 16:27 |
|
Again, that relies on several assumptions about things we don't understand yet with regard to how brains cause minds. Think about the kind of knowledge that would be required for this simulation and you begin to see what I mean.
|
# ? Nov 28, 2016 16:58 |
|
Cingulate posted:Qualia isn't equal to information processing. Qualia isn't real / isn't important / isn't provable in anyone outside yourself / can be done without for the sake of social graces if the inert thing in front of you insists on its autonomy. Take your pick. KOTEX GOD OF BLOOD posted:It's relevant because the OP asked about human-level AI, not a Turing-safe simulation that approximates human responses. Even then, because of the inability of any AI technology we know of right now to understand and interpret meaning, past a certain point you are going to have a really hard time making something capable of convincingly simulating a system that understands it. Passing a Turing test is a lot different from a system capable of organically being a celebrated poet or artist in a way that isn't just mathematically miming others' art but a real expression of feeling – or even just simulating this. And I'm saying it's an illusion to go after the mythic "human-level AI", because in practice we'll settle for something less or different than ourselves. We don't test everyone we meet to see if they have "true understanding" by getting them to create some original poetry, we just see they appear to be humans and figure their experience must therefore be close to ours, so we extend a courtesy to them. A machine that speaks, travels, asserts its autonomy and protects its existence - whether it does these things by 'simulating' rather than 'truly understanding' - is from an outside perspective about as human as the man on the street. It's like the "what is knowledge?" debate, where people keep pushing that knowledge has a special status beyond justified true belief. It's reliant on some unverifiable, internal but also universal properties that we can never seem to measure, yet somehow we get by day-to-day without a bullet-proof definition of knowledge because those justified true beliefs are close enough.
|
# ? Nov 28, 2016 19:31 |
|
For funsies' sake, is anyone up for defining clear, measurable criteria by which you will say "Yes, this AI is human-level" to even tell if you've accomplished the task in the end?
|
# ? Nov 28, 2016 19:55 |
|
Senor Tron posted:In engineering terms I'm sure it's extremely difficult/practically impossible, but is there any theoretical reason that assuming we could scan the current state of an animal's brain exactly that we couldn't simulate it in such a way? just replicating a cat brain isn't ai, it's biology. if you can simulate a cat brain in another medium then you're onto something
|
# ? Nov 28, 2016 20:42 |
|
Sundae posted:For funsies' sake, is anyone up for defining clear, measurable criteria by which you will say "Yes, this AI is human-level" to even tell if you've accomplished the task in the end? a lot of philosophy says there are no possible criteria, so probably not
|
# ? Nov 28, 2016 20:43 |
|
Dolash posted:Qualia isn't real / isn't important / isn't provable in anyone outside yourself / can be done without for the sake of social graces if the inert thing in front of you insists on its autonomy. Take your pick. KOTEX GOD OF BLOOD posted:You are making several of the bad assumptions here which Dreyfus describes, or at minimum the biological and psychological assumptions. http://faculty.ucr.edu/~eschwitz/SchwitzAbs/USAconscious.htm
|
# ? Nov 28, 2016 21:02 |
|
I'm gonna say decades or sooner. The 'go'-playing AI was a general intelligence and not really programmed with much structure. It viewed a series of go games, played a bunch of games against itself, then smurfed a bunch of humans. Recursive learning, although narrow atm, is already here. I also don't doubt that ai will be a thing. High-resolution brain emulation could be a thing in a few decades, giving your ghost a path to immortality. It opens the possibility of a bunch of things, like having a philosophical discussion on combating entropy, or you can have a reflection of yourself enjoying 24/7 IR sex orgy minigames. Idk, go read the control problem if you want these ideas fleshed out better with more effort.
|
# ? Nov 28, 2016 23:14 |
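For what it's worth, the self-play idea in the post above scales down to something you can run at home. The sketch below is plain tabular learning on a one-pile Nim variant (take 1-3 stones; taking the last stone wins); every parameter here is invented for illustration, and it bears no resemblance to the actual AlphaGo architecture beyond the two-copies-of-the-same-player loop.

```python
import random

def train(pile_size=10, games=20000, lr=0.2, epsilon=0.2, seed=0):
    """Two copies of one value table play each other and learn from results."""
    rng = random.Random(seed)
    q = {}  # (stones_left, take) -> estimated value for the player moving
    for _ in range(games):
        stones, history = pile_size, []
        while stones > 0:
            moves = [t for t in (1, 2, 3) if t <= stones]
            if rng.random() < epsilon:   # sometimes explore a random move
                take = rng.choice(moves)
            else:                        # otherwise play the current best guess
                take = max(moves, key=lambda t: q.get((stones, t), 0.0))
            history.append((stones, take))
            stones -= take
        # Whoever moved last won; credit +1/-1 alternately back up the game.
        reward = 1.0
        for state in reversed(history):
            old = q.get(state, 0.0)
            q[state] = old + lr * (reward - old)
            reward = -reward
    return q

q = train()
# With 3 stones left, taking all 3 wins on the spot; the table discovers this.
best_from_3 = max((1, 2, 3), key=lambda t: q.get((3, t), 0.0))
print(best_from_3)  # 3
```

The interesting part is that no one ever tells the program the rules of good play; both "players" start from the same blank table and bootstrap each other, which is the (vastly simplified) shape of the self-play training described above.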
|
Condiv posted:the shockingly beatable intelligence of the average human is actually beyond our most advanced computers at the moment and we're not even close to being able to simulate such intelligence. also, the AIs that are "creating" art really aren't, they are simulating "art" based on their creators' notions and preconceptions about what art is and what is worthwhile art. UR RIGHT every human that picks up an instrument/brush was doing so without a prior influence or cognitive sample that they were emulating from
|
# ? Nov 28, 2016 23:16 |
|
Cingulate posted:None of the picks, and the main point was that your post was confused about the difference between qualia and information processing. If the question is "how do we get human-level qualia for an artificial intelligence?", the answer is "you can't, it doesn't matter". You can't even prove that other human beings are experiencing qualia distinct from information processing, so it's a pointless goal for AI.
|
# ? Nov 28, 2016 23:17 |
|
blowfish posted:How advanced are the state of the art stair climbing robots anyway (basically how many hundred sensors and negative feedback loops depending on those sensors)? Biologically speaking, climbing stairs isn't just a stereotyped motion but additionally needs to constantly adjust for minor imbalances in the inherently unstable human body as well as minor changes in the flatness and elasticity of the ground and poo poo. Anything that can measure only things like the angle between the femur and tibia and the overall body tilt, but not the forces acting on each segment of the leg and foot would be expected to fail miserably. I won't say no amount of AI can control a robot with dumb legs, but I definitely expect that putting a literal bucketload of cheap stretch sensors and poo poo into the leg makes the job orders of magnitude easier. It's also the case in, say, video games and other AI simulations that they handle even fairly basic things like stairs incredibly badly, to the point where various workarounds are created, including them just plain ignoring such changes in elevation from a purely AI standpoint and just moving them while having animation take over.
|
# ? Nov 28, 2016 23:19 |
|
Rush Limbo posted:It's also the case in, say, video games and other AI simulations that they handle even fairly basic things like stairs incredibly badly, to the point where various workarounds are created including them just plain ignoring such changes in elevation from a purely AI standpoint and just moving them while having animation take over. Videogame AI is meant to be the cheapest possible thing that can move a lot of pixel terrorists from the spawn point to in front of your ray gun without clipping through the walls, though; the people programming it are working toward completely unrelated priorities from the guy making an accurate stair-climbing simulator, or the guy making a robot that can efficiently climb actual stairs. If you want a robot that can go up stairs, but it doesn't need to do it from 50 different angles simultaneously and also render a simulated chainsaw bloodspray in 1080p resolution with dynamic lighting effects on all the droplets, a $5 Arduino, some gyroscopes, and two or three hundred lines of code will do the trick. If you need it to climb stairs and also do everything else a human pair of legs might possibly successfully do in a lifetime, and on-the-fly generate and execute a reasonable response to leg emergencies the manufacturer never planned for, you're better off just getting on OKCupid and making a fresh human with its own built-in set of legs. A Wizard of Goatse fucked around with this message at 23:41 on Nov 28, 2016 |
# ? Nov 28, 2016 23:25 |
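The cheap-controller claim above can be illustrated with a sketch (Python standing in for Arduino C; the gains, sensor model, and "physics" are all made up here): fuse a fast-but-drifting gyro with a noisy-but-stable accelerometer through a complementary filter, then drive the joint with a proportional-derivative correction.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend fast-but-drifting gyro integration with slow-but-stable accel."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

def pd_correction(error, prev_error, dt, kp=4.0, kd=0.5):
    """Push back proportionally to the error, damped by its rate of change."""
    return kp * error + kd * (error - prev_error) / dt

# Crude simulated settle toward a 10-degree target tilt. The "physics"
# (motor output directly sets the tilt rate, accel agrees with the angle)
# is a stand-in for illustration, not a leg model.
angle, prev_err, dt, target = 0.0, 0.0, 0.01, 10.0
for _ in range(500):
    err = target - angle
    motor = pd_correction(err, prev_err, dt)
    prev_err = err
    angle = complementary_filter(angle, motor, angle, dt)

print(round(angle, 1))  # settles at the 10.0 target
```

That is roughly the whole trick behind hobby self-balancers: a sensor-fusion line and a feedback line repeated a hundred times a second, which is why it fits on a $5 microcontroller while "do everything human legs do" does not.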
|
What we think the future will be like is probably about as accurate as what people in the 1950s thought the future would be like: grossly, sometimes comically inaccurate. We can reasonably predict what 2021 will look like. Beyond that, it's anybody's guess. Slow News Day fucked around with this message at 23:40 on Nov 28, 2016 |
# ? Nov 28, 2016 23:38 |
|
Dolash posted:If the question is "how do we get human-level qualia for an artificial intelligence?"
|
# ? Nov 28, 2016 23:39 |
|
Rush Limbo posted:It's also the case in, say, video games and other AI simulations that they handle even fairly basic things like stairs incredibly badly, to the point where various workarounds are created including them just plain ignoring such changes in elevation from a purely AI standpoint and just moving them while having animation take over. Video game "AI" is its own can of worms and has no relation to academic or industrial AI. For one thing, the goal in video game AI isn't to win. Creating a bot that beats the poo poo out of humans is trivial. The hard part is creating one that "acts" human without kicking the human's rear end. The thing to keep in mind about AI is that we keep goalpost-shifting it. By the standards of early computer scientists, Google, Wolfram Alpha, Alexa, and a half dozen other things are AI. I expect this will continue indefinitely. We will always view "AI" as something to come, never what we have now.
|
# ? Nov 28, 2016 23:42 |
|
enraged_camel posted:What we think the future will be like is probably about as accurate as what people in the 1950s thought the future would be like: grossly, sometimes comically inaccurate. I don't think that's true. I think it's obvious that technological progress is slowing down and will probably stagnate in our lifetimes. Compare the first half of the 20th century to the second half: the first half saw way more progress. Cars, airplanes, electric power, nuclear energy, radio, telephones, television, x-rays, and much more all came out in the period between 1900 and 1950, give or take. But now look at the period from 1950 to 2000: there's way less progress. Yeah, computers got smaller and faster, we got video games and cell phones and internet stuff. Visual effects in movies got better. And... that's about it. The only really significant progress has been in computer chips. But now even that is ending, since Moore's Law will stop soon if it hasn't already. All the heady progress in computers that has been made over the past few decades will now stop. Computers in 30 years will probably be barely any better than computers today. Video game graphics probably aren't going to get any better. We're probably never going to be able to emulate a mammalian brain, even that of a mouse, let alone a human. And other fields, such as medicine, will be even slower. Drug development has slowed down dramatically. 2016 is basically 1986 except we got tablets and cellphones and social media. I think 2046 will be like 2016 almost exactly, at least technology-wise.
|
# ? Nov 28, 2016 23:58 |
|
Rush Limbo posted:It's also the case in, say, video games and other AI simulations that they handle even fairly basic things like stairs incredibly badly, to the point where various workarounds are created including them just plain ignoring such changes in elevation from a purely AI standpoint and just moving them while having animation take over. Video game ai is a bad metric for measuring where we are regarding ai. Development costs constrain stair ai. Also, making it run as a side process on $400 consumer platforms constrains it.
|
# ? Nov 29, 2016 00:01 |
|
Blue Star posted:I dont think that's true. I think it's obvious that technological progress is slowing down and will probably stagnate in our lifetimes. Compare the first half of the 20th century to the second half: the first half saw way more progress. Cars, airplanes, electric power, nuclear energy, radio, telephones, television, x-rays, and much more all came out in the period between 1900 and 1950, give or take. But now look at the period from 1950 to 2000, there's way less progress. Yeah computers got smaller and faster, we got video games and cell phones and internet stuff. Visual effects in movies got better. And...that's about it. Next: an election run via twitter, self-driving cars, computers that talk to you, the end of coal in the West, ...
|
# ? Nov 29, 2016 00:14 |
|
Blue Star posted:I don't think that's true. I think it's obvious that technological progress is slowing down and will probably stagnate in our lifetimes. Compare the first half of the 20th century to the second half: the first half saw way more progress. Cars, airplanes, electric power, nuclear energy, radio, telephones, television, x-rays, and much more all came out in the period between 1900 and 1950, give or take. But now look at the period from 1950 to 2000, there's way less progress. Yeah computers got smaller and faster, we got video games and cell phones and internet stuff. Visual effects in movies got better. And...that's about it. Is this a joke post? Did you go out of your way to list mostly things from the 1800s as being from 1900 to 1950?
|
# ? Nov 29, 2016 00:15 |
|
High-level AI is a zillion years away. With no irony, 5 minutes later, opens up Google and types "that movie with the thing" and gets what he was looking for.
|
# ? Nov 29, 2016 00:21 |
|
The answer to this entire discussion is 'quit listening to the poo poo waterfall that pours out of Big Yud'. Human-intelligence AI will likely exist eventually as an accidental byproduct of something else more useful. However, our current technology and understanding of sentience are such that we can't even accurately predict what we don't know at this point.
|
# ? Nov 29, 2016 00:21 |
|
Unormal posted:High level AI is a zillion years away. I honestly think "AI" is more of a looks thing than anything meaningful. If you had a truly aware creature that responded via database queries, no one would ever rank it as AI, even if it could do literally every single thing a human could. But I bet if you stuck a 1990s chatbot in a realistic-looking robot with a nice voice, you'd have people arguing it deserves rights.
|
# ? Nov 29, 2016 00:31 |
|
Blue Star posted:2016 is basically 1986 except we got tablets and cellphones and social media. Industrial-scale cloning is becoming a thing, and genetically engineered crops are a fact of life rather than a terrifying novelty. We have had a continuously manned outpost in orbit for fifteen years. While manned spaceflight has regressed beyond the space station, probe technology has made a leap forward: rather than short-lived landers, there are mobile robotic rovers on Mars, one of which has been operating continuously for over a decade. Dismissing tablets and cellphones like that is a mistake; in the first world almost every individual has easy and cheap access to the sum total of human knowledge. Communication is an order of magnitude easier. Twenty years ago, international calls were expensive; I remember my mum having to ration herself to one Sunday evening call home to her sisters each week. Now you can contact someone on the other side of the planet as much as you want for no more than the cost of your monthly internet bill, or just use one of the widespread free wifi spots. Remember science fiction with videophones on the wall? They are now a mundane fact of life. 3D printing is rapidly becoming a serious technology; school students have access to industrial prototyping tools that would have been unimaginable thirty years ago, and when they want to do something with that work they can, for pocket change, buy something as powerful as a 1980s supercomputer to be the brains for it. Electric vehicles have become big business. In many areas, house roofs are covered with cheap solar panels. While not cured, cancer treatment and survival rates have drastically improved. If caught early, HIV can be controlled to the point where it isn't the death sentence it once was. Virtual reality is now a consumer technology. Never mind the high end; cheap smartphone headsets give an experience better than that available at any cost in the 1980s.
Those same supercomputers in our pockets have AR capabilities unimaginable in the 1980s. Ever used Google Translate? You can point your device at text in almost any language and see it instantly changed to the language of your choice. Not just translated, actually visually changed to become something you can read. While living through the changes it's easy to overlook everything that is happening in the world, but I would argue the 1986 to 2016 difference is much larger than the 1956 to 1986 one. I mean, what changed for the average person between 1956 and 1986 in terms of new technology? Televisions got better, audio players got smaller, phones got a bit better, and computers shrank to where people could have them in their homes for only a few hundred dollars, but if you are talking about day-to-day life that period was more of an evolution, whereas the information age has been a revolution.
|
# ? Nov 29, 2016 00:46 |
|
Blue Star posted:I dont think that's true. I think it's obvious that technological progress is slowing down and will probably stagnate in our lifetimes. Compare the first half of the 20th century to the second half: the first half saw way more progress. Cars, airplanes, electric power, nuclear energy, radio, telephones, television, x-rays, and much more all came out in the period between 1900 and 1950, give or take. But now look at the period from 1950 to 2000, there's way less progress. Yeah computers got smaller and faster, we got video games and cell phones and internet stuff. Visual effects in movies got better. And...that's about it. You're wrong. Take a look at this. Yes, it's from "singularityhub dot com" but it makes a lot of good points and has charts and stuff.
|
# ? Nov 29, 2016 01:18 |
|
Owlofcreamcheese posted:I honestly think "AI" is more of a looks thing than anything meaningful. If you had a truly aware creature but it responded via database queries no one would ever rank it as AI even if it could do literally every single thing a human could but I bet if you stuck a 1990s chatbot in a realistic looking robot with a nice voice you'd have people arguing it deserves rights. People already are.
|
# ? Nov 29, 2016 01:59 |
|
Senor Tron posted:Industrial scale cloning is becoming a thing, genetically engineered crops are a fact of life rather than a terrifying novelty, We have had a continually manned outpost in orbit for fifteen years. While manned spaceflight has regressed beyond the space station, in terms of probes technology has made a leap forward. Rather than short lived landers there are mobile robotic rovers on Mars, one of which has been operating continuously for over a decade. Dismissing tablets and cellphones like that is a mistake, in the first world almost every individual has easy and cheap access to the sum total of human knowledge. However, because the western middle class is becoming relatively poorer compared to the western upper class, people feel like they are getting the same poo poo as in 1980 except slightly smaller/faster and made in China.
|
# ? Nov 29, 2016 02:00 |
|
Blue Star posted:I dont think that's true. I think it's obvious that technological progress is slowing down and will probably stagnate in our lifetimes. Compare the first half of the 20th century to the second half: the first half saw way more progress. Cars, airplanes, electric power, nuclear energy, radio, telephones, television, x-rays, and much more all came out in the period between 1900 and 1950, give or take. But now look at the period from 1950 to 2000, there's way less progress. Yeah computers got smaller and faster, we got video games and cell phones and internet stuff. Visual effects in movies got better. And...that's about it. how old are you?
|
# ? Nov 29, 2016 03:08 |
|
Senor Tron posted:While living through the changes it's easy to overlook everything that is happening in the world, but I would argue the 1986 to 2016 difference is much larger than the 1956 to 1986 one. I mean what changed for the average person between 1956 and 1986 in terms of new technology? Televisions got better, audio players got smaller, phones got a bit better and computers shrank to where people could have them in their homes for only a few hundred dollars, but if you are talking about day to day life that period was more of an evolution whereas the information age has been a revolution. Just from memory + 15 minutes of research on Wikipedia: Unless I missed something in my research, jet airliners basically didn't exist in 1956 but were very common in 1986. Office workers still mostly used typewriters in 1956. By 1986, typewriters had largely been replaced by computers and dedicated electronic word processors. In terms of communication technology, while I can't say for sure because I didn't exist back then, pagers and answering machines (the latter existed in 1956, but became much more common by 1986) must have had an impact at least comparable to social media. Also, from what I gather, telegraph technologies greatly improved over the period. My mother is a computer programmer who started working in the 1970s, and she told me once that back then big companies had access to Telex systems that were pretty much equivalent to email. Portable audio recorders and players were expensive and pretty much only used by professional reporters in 1956. By 1986, portable cassette players/recorders were extremely common. I've used both an iPod and an actual 1980s Walkman, so I can say from experience that the difference between them in terms of user experience isn't all that great. Cable TV, VCRs, and video rental stores all came into existence between 1956 and 1986. 
Again, I didn't exist back then, but the impact on personal entertainment must have been at least comparable to Tivo, Netflix, and Amazon Instant. Plus there's this. I'm not saying that I agree with Blue Star re: technological stagnation in general (I'd have to do a lot more research before I'd feel comfortable making a verdict), but 1956-1986 saw the introduction or at least widespread adoption of plenty of revolutionary technologies. INH5 fucked around with this message at 04:25 on Nov 29, 2016 |
# ? Nov 29, 2016 04:18 |
|
Sethex posted:UR RIGHT every human that picks up an instrument/brush was doing so without a prior influence or cognitive sample that they were emulating from All of them came to their own conclusions about what art is and pursued it, as opposed to a neural net, which can only reflect the biases of its creators.
|
# ? Nov 29, 2016 12:51 |
|
Condiv posted:All of them came to their own conclusions about what art is and pursues it, as opposed to a neural net which can only reflect the biases of its creators. Actually, most artists don't know what the gently caress they are doing or why. Also, a neural network doesn't necessarily reflect the biases of its creators either. A neural network is trained using a method which you might imagine as similar to Pavlovian conditioning. The neural net is exposed to a set of training data, and the network's topology is reinforced if it produces a desirable answer, and mutated if it produces an undesirable answer. Eventually the network is arranged in a way that transforms the input into the corresponding desirable output. It is an uninteresting (in my opinion) numerical technique which results in a system which will happen to be correct within a given error value. It has more in common with something like a linear regression than human intelligence, despite its name.
|
# ? Nov 29, 2016 13:09 |
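Dog Jones's training loop can be shown concretely with the smallest possible case: a single artificial neuron nudged toward the desired answers on a toy dataset. The data and every name below are invented, and modern networks adjust millions of weights by gradient descent rather than this one-unit rule, but the reinforce-on-error shape is the same.

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    """samples: list of ((x1, x2), label) with label in {0, 1}.
    Weights are nudged toward the desirable answer on each example."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = label - pred       # desirable answer minus actual answer
            w[0] += lr * err * x1    # reinforce or correct each weight
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND, a linearly separable toy function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0 for (x1, x2), _ in data]
print(preds)  # [0, 0, 0, 1] -- matches the labels
```

As Dog Jones says, the result is closer in spirit to fitting a regression than to anything resembling human understanding: the numbers settle wherever they happen to minimize error on the training data.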
|
Dog Jones posted:Actually, most artists don't know what the gently caress they are doing or why. Yes, I'm aware of how neural networks work; I've created one of my own. The desirable/undesirable answer is where the creator's biases are introduced, and why the network ends up reflecting those who train it. As for your "artists don't know what they're doing" argument: they don't need to for cognition to be there. Not all cognition is conscious.
|
# ? Nov 29, 2016 13:18 |
|
|
|
INH5 posted:Just from memory + 15 minutes of research on Wikipedia: In general, the most profound changes of the last couple of decades revolve around the internet and less tangible shifts. We're living through one of the largest social and economic upheavals in human history, but these changes are more subtle than a new electronic media player you can hold in your hand.
|
# ? Nov 29, 2016 13:24 |