Yngwie Mangosteen
Aug 23, 2007
ah yes the very real and probable thing 'a hyperintelligent AI that uses processing power to generate and torture copies of every person that didn't immediately blow peter thiel while he funded its creation.'

steinrokkan
Apr 2, 2011



Soiled Meat
That simulation thing doesn't even make basic sense to me. So having a million objects flying one mile under the speed of light is fine, but one object flying faster than light would crash the CPU? That doesn't seem like how this should work. The number of interactions creating complexity in a system should be the limiting factor, not arbitrary velocity. And given that all objects can be said to interact with all other objects in some infinitesimally small way, it isn't apparent why their speed, of all things, would be such a big deal when the interactions are already being calculated; you are just using bigger floats. Or is the speed of light the biggest possible value of the variable type used to store the velocity vector, or some other such silly assumption?

Also why wouldn't the simulation just take longer to process "frames" if overloaded? The subjects wouldn't be able to determine that one second of simulation time takes five minutes of real time to model, and another takes only two minutes etc.
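The slow-frames point can be sketched in a few lines (a toy model; the function and numbers here are purely illustrative, not a claim about how any real simulation would work):

```python
# Toy fixed-timestep simulation: the wall-clock cost per "frame" varies
# with load, but observers inside only ever see the constant simulated
# dt, so the slowdown is undetectable from within.

def run(frames, load_per_frame):
    """Advance the simulation; return (simulated_time, wall_time)."""
    SIM_DT = 1.0                     # one simulated second per frame, always
    sim_time = 0.0
    wall_time = 0.0
    for load in load_per_frame[:frames]:
        wall_time += 0.001 * load    # heavy frames just take longer outside
        sim_time += SIM_DT           # inside, time advances uniformly
    return sim_time, wall_time

# A quiet frame and a frame with half a million interactions advance
# simulated time identically; only the host notices the difference.
sim, wall = run(3, [10, 500_000, 10])
```

Inside the loop, dt is constant regardless of the wall-clock cost, which is the sense in which inhabitants could never clock the host.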

Raenir Salazar
Nov 5, 2010

College Slice

Captain Monkey posted:

ah yes the very real and probable thing 'a hyperintelligent AI that uses processing power to generate and torture copies of every person that didn't immediately blow peter thiel while he funded its creation.'

You're being disingenuous. There are two halves to the idea (which I made clear in my post you apparently refused to read): one half is a very real and serious cause for concern among AI researchers, the rest is a weird forum alternate reality rpg premise.

steinrokkan posted:

That simulation thing doesn't even make basic sense to me. So having a million objects flying one mile under the speed of light is fine, but one object flying faster than light would crash the CPU? That doesn't seem like how this should work. The number of interactions creating complexity in a system should be the limiting factor, not arbitrary velocity. And given that all objects can be said to interact with all other objects in some infinitesimally small way, it isn't apparent why their speed, of all things, would be such a big deal when the interactions are already being calculated; you are just using bigger floats.

Also why wouldn't the simulation just take longer to process "frames" if overloaded? The subjects wouldn't be able to determine that one second of simulation time takes five minutes of real time to model, and another takes only two minutes etc.

You could explain things like cold fusion experiments that couldn't be reproduced as being the result of glitches, like the kind you see in software.

steinrokkan
Apr 2, 2011



Soiled Meat

Raenir Salazar posted:

You're being disingenuous. There are two halves to the idea: one half is a very real and serious cause for concern among AI researchers, the rest is a weird forum alternate reality rpg premise.

One of those halves is fictional.

steinrokkan
Apr 2, 2011



Soiled Meat

Raenir Salazar posted:

You could explain things like cold fusion experiments that couldn't be reproduced as being the result of glitches, like the kind you see in software.

Yeah, I could, if I were a teenager who just got out of the theater after seeing The Matrix. But hopefully I would grow out of that phase soon.

Raenir Salazar
Nov 5, 2010

College Slice

steinrokkan posted:

One of those halves is fictional.

Technically, both halves are fictional, because hyperintelligent AIs don't yet exist, but there's no reason to believe they won't eventually exist. The rest of the thought experiment is nonsense though.

steinrokkan posted:

Yeah, I could, if I were a teenager who just got out of the theater after seeing The Matrix. But hopefully I would grow out of that phase soon.

I'm not sure what that has to do with anything though. You were asking a question within the context of the "the universe is a simulation" theory, yes? If it's assumed to be true, I'm not sure how this isn't a potential answer to your question.

ashpanash
Apr 9, 2008

I can see when you are lying.

Raenir Salazar posted:

They really aren't though. Roko's Basilisk assumes the inevitability of a hyperintelligent AI, a very real and reasonable thing

Is it, though? We don't even have examples of true "general intelligences" in nature - I'd argue that human intelligence is pretty fantastic but is still limited and can only be augmented so much before we get diminishing returns. There's no real demonstrable evidence either that a true general intelligence can be created or that such a general intelligence would be capable of exponentially iterating itself to create a 'hyperintelligence.' Both would need to be true for such a thing to be inevitable, right?

Yngwie Mangosteen
Aug 23, 2007

ashpanash posted:

Is it, though? We don't even have examples of true "general intelligences" in nature - I'd argue that human intelligence is pretty fantastic but is still limited and can only be augmented so much before we get diminishing returns. There's no real demonstrable evidence either that a true general intelligence can be created or that such a general intelligence would be capable of exponentially iterating itself to create a 'hyperintelligence.' Both would need to be true for such a thing to be inevitable, right?

Exactly. It's a pipe dream in the same way the rest are. 'It COULD exist, so it MUST exist [either now or in the future]' is a weak, silly argument.

I don't think I'm the one not reading posts, heh.

Raenir Salazar
Nov 5, 2010

College Slice

ashpanash posted:

Is it, though? We don't even have examples of true "general intelligences" in nature - I'd argue that human intelligence is pretty fantastic but is still limited and can only be augmented so much before we get diminishing returns. There's no real demonstrable evidence either that a true general intelligence can be created or that such a general intelligence would be capable of exponentially iterating itself to create a 'hyperintelligence.' Both would need to be true for such a thing to be inevitable, right?

If you happened to be taking care of some colonies of ants like I do it might give you a different perspective on this. :D

But remember of course that "reasonableness" doesn't mean it's guaranteed; maybe we won't ever, but it would be strange to me if we could somehow never accomplish creating an artificial intelligence that mimics intelligence as we observe it in nature.

I'm not sure if this is the right essay I am thinking of, but even Turing, way back when, seems to make a pretty good argument that there's no substantive reason to think it's impossible, just very difficult.

It's easy to see that given 10 years we probably won't make that much practical progress, although looking at different papers I see interesting things being done that might yield interesting results on a 10-year time span. But what about 100 years? 1000 years? 10,000 years? We can't really fathom scientific progress on those time scales.

Captain Monkey posted:

Exactly. It's a pipe dream in the same way the rest are. 'It COULD exist, so it MUST exist [either now or in the future]' is a weak, silly argument.

I don't think I'm the one not reading posts, heh.

Not really; people much smarter have considered this question and written substantive arguments as to why they think it's reasonable. It isn't an appeal to vague probabilities of "it could, maybe?" like you're presenting it here; that's just the best some of us in the thread can do in considering the problem.

Beelzebufo
Mar 5, 2015

Frog puns are toadally awesome


One thing I've always wondered is what the hyperintelligent AI is running on. Wouldn't the exponential complexity require exponentially more energy and matter, and be subject to heat dissipation and other problems like that? I always hear about this AI taking over all the computers when it's created, but even then you have things like network lag and packet errors and such that would break this perfect unity up. It feels like the AI argument just assumes exponential technological progress in computing power forever, despite some pretty obvious hard limits to how small you can make bits, or how quickly you can dissipate heat.


E: Ok, but the argument that "we may someday create a digital lifeform that approximates a human brain, or even surpasses it somewhat" is different from saying "this being will at some point have access to such a limitless amount of energy and computing power that it will dedicate a substantial amount of it to simulating human brains with a fidelity equal to a real brain". Even without the digital torture, why assume that this hyperintelligence would budget any amount of energy for this purpose?

Beelzebufo fucked around with this message at 18:42 on Apr 7, 2021

Yngwie Mangosteen
Aug 23, 2007

Beelzebufo posted:

One thing I've always wondered is what the hyperintelligent AI is running on. Wouldn't the exponential complexity require exponentially more energy and matter, and be subject to heat dissipation and other problems like that? I always hear about this AI taking over all the computers when it's created, but even then you have things like network lag and packet errors and such that would break this perfect unity up. It feels like the AI argument just assumes exponential technological progress in computing power forever, despite some pretty obvious hard limits to how small you can make bits, or how quickly you can dissipate heat.


E: Ok, but the argument that "we may someday create a digital lifeform that approximates a human brain, or even surpasses it somewhat" is different from saying "this being will at some point have access to such a limitless amount of energy and computing power that it will dedicate a substantial amount of it to simulating human brains with a fidelity equal to a real brain". Even without the digital torture, why assume that this hyperintelligence would budget any amount of energy for this purpose?

smart guys have thought about this so the discussion is over and anyone who disagrees is not smart.

Bug Squash
Mar 18, 2009

The point of the Boltzmann brain thought experiment is that we don't seem to exist in a cosmology where Boltzmann brains overwhelmingly outnumber normal brains. It implies that currently observed physics won't continue indefinitely into an infinitely old universe.

The argument against them in the thread seems to be that there is a level of improbability where it becomes literally the same as impossible, which, while it seems intuitively sensible, raises some massive philosophical issues. Like, where along the spectrum of improbable things do you cross the threshold of something becoming impossible despite still being theoretically possible?

Yngwie Mangosteen
Aug 23, 2007
It's like pornography. I can't define for you when an idea is too stupid to consider, but I know it when I see it.

Libluini
May 18, 2012

I gravitated towards the Greens, eventually even joining the party itself.

The Linke is a party I grudgingly accept exists, but I've learned enough about DDR-history I can't bring myself to trust a party that was once the SED, a party leading the corrupt state apparatus ...
Grimey Drawer

DrSunshine posted:

We should terraform the moon by smooshing 4x its mass into it and make it a second earth, and then colonize that earth and use nanotechnology and genetic engineering to make anime real.

eXXon posted:

Having 4x stronger tides would probably wreck some poo poo here.

The real solution is to go back in time 4.5Gyr and make Theia strike a more glancing blow and have a smaller Earth but much more massive moon. Even better if it ends up closer than the moon is today; there would be some wicked rad tides on both planets.

Or we could just invent energy shields and put big generators on the moon to project a field to keep atmosphere inside, and then fill it up. Takes a lot less resources. :v:


Beelzebufo posted:

In fact, I struggle to think how you would explain the laws of physics without the speed of light. It's not like that value is incidental to other parts of physics. Like I guess the simulator gods just decided to tie mass energy equivalence to the processing speed of the simulation for.... what reason exactly?

I guess this would be another universe-ending cataclysm, like false vacuum collapse: the simulationists decide one day to do a hardware upgrade for their simulation; when they switch everything back on, the speed of light is now 100x faster and the program immediately collapses, because everything in the code is bound to the processing speed and doesn't work right anymore.


Raenir Salazar posted:

If we do live in a simulation:

(a) maybe we can hack it like in the Matrix to get special powers.
(b) Maybe death isn't permanent, maybe there is like a "memory buffer in the sky" that functions as an afterlife, or get recycled back in like reincarnation?

This would also open the way for demons and horrors to exist, as some sort of virus or glitch traveling through the simulation

ashpanash
Apr 9, 2008

I can see when you are lying.

Raenir Salazar posted:

If you happened to be taking care of some colonies of ants like I do it might give you a different perspective on this. :D

Ants can do some amazing things, but I think it'd be hard to argue that they are Turing-complete. For example, there are ants that can build amazing mound-like structures that can reach several feet high - and yet, there is no known ant colony that would 'think' to make such a structure in order to access a food source. Perhaps a billion billion iterations with a particular applied selective pressure may indeed cause that behavior to emerge, but likely at the cost of some loss of functionality in some other system.

Perhaps, a super-advanced hyper-intelligence will discover a way to get around the laws of thermodynamics and discover that you can, in fact, get a free lunch. Perhaps. But doesn't the sheer unlikelihood of such a scenario make the basilisk proposal exponentially more untenable and unlikely? Perhaps even preposterous? A one in a Tree(3) ^ Graham's Number chance is effectively zero, and thus not really something to consider worth worrying about.

As a creepy story, though, fun and effective. I'll give it that.

Raenir Salazar
Nov 5, 2010

College Slice

ashpanash posted:

Ants can do some amazing things, but I think it'd be hard to argue that they are Turing-complete. For example, there are ants that can build amazing mound-like structures that can reach several feet high - and yet, there is no known ant colony that would 'think' to make such a structure in order to access a food source. Perhaps a billion billion iterations with a particular applied selective pressure may indeed cause that behavior to emerge, but likely at the cost of some loss of functionality in some other system.

Perhaps, a super-advanced hyper-intelligence will discover a way to get around the laws of thermodynamics and discover that you can, in fact, get a free lunch. Perhaps. But doesn't the sheer unlikelihood of such a scenario make the basilisk proposal exponentially more untenable and unlikely? Perhaps even preposterous? A one in a Tree(3) ^ Graham's Number chance is effectively zero, and thus not really something to consider worth worrying about.

As a creepy story, though, fun and effective. I'll give it that.

Woahwoahwoah, they absolutely do. There are army ants in Australia that make structures to reach wasp nests and so on.

As for creepy stories, I think paperclip optimizers are a little creepier and more "conceivable", because humans (and ants) on some level display paperclip-optimizer-like behaviour despite such optimization being, from a big-picture perspective, disadvantageous to our own long-term survival. Roko's Basilisk requires too much explanation and buy-in to be creepy to anyone outside of the LessWrong forums.

ashpanash
Apr 9, 2008

I can see when you are lying.

Raenir Salazar posted:

Woahwoahwoah, they absolutely do. There are army ants in Australia that make structures to reach wasp nests and so on.

Is that so? Wow! Color me impressed. I gotta check that out.

Yngwie Mangosteen
Aug 23, 2007

ashpanash posted:

Is that so? Wow! Color me impressed. I gotta check that out.

They just stretch across each other toward food. It's been studied mathematically and is nowhere near as impressive as Raenir is saying.


https://www.quantamagazine.org/the-simple-algorithm-that-ants-use-to-build-bridges-20180226/


https://www.youtube.com/watch?v=9_mNC81fnyY

There's an enormous difference between 'keep walking and let other allies walk over you toward the scent of food that happens to form a bridge' and 'builds a structure using other materials to reach food'. And it's super disingenuous to claim otherwise.
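The local rule the Quanta piece describes can be caricatured in a few lines (a loose toy version; the `update` function, `patience` parameter, and thresholds are my own illustrative assumptions, not the published model):

```python
# Toy version of the army-ant bridge rule: each ant decides from purely
# local information. Frozen ants stay in the bridge while traffic keeps
# stepping on them, and leave once traffic has stopped for a while.

def update(frozen, ticks_since_traffic, patience=3):
    """One ant's decision per tick; no planning or global view involved."""
    if frozen and ticks_since_traffic > patience:
        return False    # bridge idle long enough: leave
    return frozen       # otherwise keep current role

# Stay while stepped on, dissolve when idle; a free ant is unaffected.
assert update(True, 1) is True
assert update(True, 5) is False
assert update(False, 5) is False
```

The point of the argument is that a rule this local produces the bridge without any ant "deciding" to build one.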

Raenir Salazar
Nov 5, 2010

College Slice

ashpanash posted:

Is that so? Wow! Color me impressed. I gotta check that out.

Here you go!: https://www.youtube.com/watch?v=bUNKQqCRFuQ

Technically this structure is made out of themselves, but it's still a pretty impressive feat; and there are many ant species with interesting properties, like Solenopsis invicta, which forms basically a kind of material out of itself. https://www.popularmechanics.com/science/animals/a9759/a-mob-of-fire-ants-becomes-a-new-kind-of-material-16202096/

Captain Monkey posted:

They just stretch across each other toward food. It's been studied mathematically and is nowhere near as impressive as Raenir is saying.


https://www.quantamagazine.org/the-simple-algorithm-that-ants-use-to-build-bridges-20180226/


https://www.youtube.com/watch?v=9_mNC81fnyY

There's an enormous difference between 'keep walking and let other allies walk over you toward the scent of food that happens to form a bridge' and 'builds a structure using other materials to reach food'. And it's super disingenuous to claim otherwise.

I feel like you're entirely missing the point and context of the discussion.

Yngwie Mangosteen
Aug 23, 2007

Raenir Salazar posted:

I feel like you're entirely missing the point and context of the discussion.

I'm not the one missing the point. 'Keep walking toward food and tightly hold onto the ant below you, and let another friendly ant above you grip on as it, too, reaches for food' is incredibly simple. I even provided the algorithm behind it for you.

It's very, very different than something that requires planning, foresight, and decision-making at the level that being Turing-complete requires.


edit: Like these ants wouldn't be able to extrapolate this to outside contexts. It's the same thing as a hastily written chatbot having a few interesting responses to pre-set questions or keywords.

Bug Squash
Mar 18, 2009

Captain Monkey posted:

It's like pornography. I can't define for you when an idea is too stupid to consider, but I know it when I see it.

That's a feelgood answer, but we're dealing with hard-nosed numbers.

Let's boil it down to rolling ten-sided dice. Rolling a 10 on one is a 1 in 10 chance. Add another die, and rolling two tens is 1 in 100. Keep adding dice, and eventually you get the odds of the spontaneous creation of a Boltzmann brain. At what point along the path of adding dice did adding one more die change the odds to literally zero?

Most physicists believe Boltzmann brains are ruled out because the assumptions needed for their existence are wrong (i.e. an infinitely long future of quantum fluctuations), which is a fairly reasonable position.
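The dice argument is easy to make exact (a minimal sketch; `all_tens` is just an illustrative name): the probability shrinks geometrically but never reaches zero, so no single added die can be the one that makes the outcome "impossible".

```python
from fractions import Fraction

# P(all tens on n ten-sided dice) = (1/10)**n, computed exactly:
# every added die divides the odds by ten, and no n makes them zero.

def all_tens(n):
    return Fraction(1, 10) ** n

assert all_tens(1) == Fraction(1, 10)
assert all_tens(2) == Fraction(1, 100)
assert all(all_tens(n) > 0 for n in range(1, 200))   # never exactly zero
```

Exact rational arithmetic avoids the floating-point underflow that would otherwise round tiny probabilities to 0.0 and muddy the philosophical point.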

Yngwie Mangosteen
Aug 23, 2007

Bug Squash posted:

That's a feelgood answer, but we're dealing with hard-nosed numbers.

Let's boil it down to rolling ten-sided dice. Rolling a 10 on one is a 1 in 10 chance. Add another die, and rolling two tens is 1 in 100. Keep adding dice, and eventually you get the odds of the spontaneous creation of a Boltzmann brain. At what point along the path of adding dice did adding one more die change the odds to literally zero?

Most physicists believe Boltzmann brains are ruled out because the assumptions needed for their existence are wrong (i.e. an infinitely long future of quantum fluctuations), which is a fairly reasonable position.

I dunno, that was my opinion on the matter and you didn't like it. It's fuzzy, like most things in reality.

Raenir Salazar
Nov 5, 2010

College Slice

Captain Monkey posted:

edit: Like these ants wouldn't be able to extrapolate this to outside contexts. It's the same thing as a hastily written chatbot having a few interesting responses to pre-set questions or keywords.

But by all evidence, they actually can and do. Freakin' pharaoh ants can live inside my goddamn computer! (And they did! Until I caught them.) For some species of ants, there seems to be no environment they couldn't adapt to. Similar to humans.

Yngwie Mangosteen
Aug 23, 2007

Raenir Salazar posted:

But by all evidence, they actually can and do. Freakin' pharaoh ants can live inside my goddamn computer! (And they did! Until I caught them.) For some species of ants, there seems to be no environment they couldn't adapt to. Similar to humans.

I'm not sure how 'can inhabit a space you don't expect them to inhabit' is quite the same as 'is a fully intelligent being that can pass a Turing test', but you maybe need to do a bit more reading on this sort of thing.

typhus
Apr 7, 2004

Fun Shoe
While not explicitly about space, there's some news today about hints of new physics that is pretty exciting.

quote:

Evidence is mounting that a tiny subatomic particle called a muon is disobeying the laws of physics as we thought we knew them, scientists announced on Wednesday.

The best explanation, physicists say, is that the muon is being influenced by forms of matter and energy that are not yet known to science, but which may nevertheless affect the nature and evolution of the universe. The new work, they said, could eventually lead to a breakthrough in our understanding of the universe more dramatic than the heralded discovery in 2012 of the Higgs boson, a particle that imbues other particles with mass.

Muons are akin to electrons but far heavier. When muons were subjected to an intense magnetic field in experiments performed at the Fermi National Accelerator Laboratory, or Fermilab, in Batavia, Ill., they wobbled like spinning tops in a manner slightly but stubbornly and inexplicably inconsistent with the most precise calculations currently available. The results confirmed results in similar experiments at the Brookhaven National Laboratory in 2001 that have tantalized physicists ever since.

“This quantity we measure reflects the interactions of the muon with everything else in the universe,” said Renee Fatemi, a physicist at the University of Kentucky. “This is strong evidence that the muon is sensitive to something that is not in our best theory.”

Beelzebufo
Mar 5, 2015

Frog puns are toadally awesome


Sorry, but what about ants is analogous to a hyperintelligence iterating itself infinitely? Last time I checked, ants still need matter and energy to function, and even if for the sake of argument you can claim that their collective behaviour constitutes some rudimentary emergent "intelligence", eventually you wouldn't be able to coordinate actions across the whole network. The big multimillion-ant "supercolonies" are just chemically recognizing each other; they aren't sending pheromone waves across the entirety of the "network", so if anything it sort of proves the point that a hyperintelligence would be extremely fragmented if it tried to operate in some sort of distributed sense.

E: I don't really have a problem with the idea of an AI potentially posing a threat to humans; the issue I have with all these hyperintelligences is that they sort of handwave thermodynamics and physics away as technical problems the AI will solve on the path to omnipotence. That's where it's hard to get behind these thought experiments as plausible.

Beelzebufo fucked around with this message at 19:32 on Apr 7, 2021

Yngwie Mangosteen
Aug 23, 2007
Yeah, your last edit really hits the nail on the head. People with a half-assed understanding think of it like magic and... it's not.

Bug Squash
Mar 18, 2009

Captain Monkey posted:

I dunno, that was my opinion on the matter and you didn't like it. It's fuzzy, like most things in reality.

Ok, but you do believe that there is a point where adding one extra die flips a probability from possible to impossible, even if pinpointing the exact point isn't possible. That is the logical conclusion of what you've said.

I don't think I'm unreasonable in saying that that raises some major issues.

The Chad Jihad
Feb 24, 2007


"If reality is a simulation, then magic is hacking tools and angels are server admins" is a fun story prompt but a very silly thing to actually consider

Raenir Salazar
Nov 5, 2010

College Slice

Beelzebufo posted:

Sorry, but what about ants is analogous to a hyperintelligence iterating itself infinitely? Last time I checked, ants still need matter and energy to function, and even if for the sake of argument you can claim that their collective behaviour constitutes some rudimentary emergent "intelligence", eventually you wouldn't be able to coordinate actions across the whole network. The big multimillion-ant "supercolonies" are just chemically recognizing each other; they aren't sending pheromone waves across the entirety of the "network", so if anything it sort of proves the point that a hyperintelligence would be extremely fragmented if it tried to operate in some sort of distributed sense.

E: I don't really have a problem with the idea of an AI potentially posing a threat to humans; the issue I have with all these hyperintelligences is that they sort of handwave thermodynamics and physics away as technical problems the AI will solve on the path to omnipotence. That's where it's hard to get behind these thought experiments as plausible.

I think there's some confusion as to what we're talking about. I don't think we're talking about some magic intelligence that exists without biomechanical limitations like energy and mass. Nor are ants perfectly analogous to a speculated superintelligence; but it is amusing that there is a similar sort of "Well, X could never do Y! (Until they do)" interaction happening here, mirroring the same interaction Turing predicted.

Like, a bunch of rocks can be Turing complete; so it isn't completely unreasonable that ants could display similar behaviour, even if it's chemicals instead of electricity. But again, I'm not saying ants can become the same as a superintelligent AI (although nothing suggests ants couldn't theoretically evolve to become sentient and self-aware like humans; I'm not sure that would be the same as superintelligence, but they're probably comparable).

So if the argument goes, "Well, we can't one day make a superintelligent AI because we can't even observe it in nature", pointing out the existence of ants isn't to suggest that ants are interchangeable with superintelligence, but to point out a viable, theoretical path by which a superintelligence could one day be created from those principles via emergent behavior.

And this is mainly focused on superintelligences that are interchangeable with paperclip optimizers, to say nothing of sentience or self-awareness, which I think is basically a separate topic.
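"A bunch of rocks can be Turing complete" is presumably a nod to the xkcd strip about hand-simulating a cellular automaton; the standard concrete example of a trivially simple local rule that is nonetheless Turing-complete is Rule 110 (proved universal by Matthew Cook, published 2004). A minimal step function, for flavor:

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbours, yet the rule as a whole is known to be Turing-complete.

RULE = 110

def step(cells):
    """One synchronous update of a row of 0/1 cells (edges read as 0)."""
    padded = [0] + cells + [0]
    out = []
    for i in range(1, len(padded) - 1):
        # Encode the 3-cell neighbourhood as a number 0..7, then look up
        # that bit of the rule number 110 (binary 01101110).
        pattern = padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1]
        out.append((RULE >> pattern) & 1)
    return out

# A lone 1 grows leftward, the start of Rule 110's characteristic triangles.
row = step([0, 0, 0, 0, 1])   # -> [0, 0, 0, 1, 1]
```

The relevance to the ant argument: universality can live in an update rule this small, so "the parts are simple" is not by itself an argument against emergent computation.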

Yngwie Mangosteen
Aug 23, 2007

Bug Squash posted:

Ok, but you do believe that there is a point where adding one extra die flips a probability from possible to impossible, even if pinpointing the exact point isn't possible. That is the logical conclusion of what you've said.

I don't think I'm unreasonable in saying that that raises some major issues.

No, I even explicitly said it was fuzzy in real life. I'm not super interested in you continuing to try this weird philosophical trap thing you're doing, it's neither interesting nor relevant.

Raenir Salazar posted:

So if the argument goes, "Well, we can't one day make a superintelligent AI because we can't even observe it in nature", pointing out the existence of ants isn't to suggest that ants are interchangeable with superintelligence, but to point out a viable, theoretical path by which a superintelligence could one day be created from those principles via emergent behavior.

This doesn't follow; you're just wanting to extrapolate from it, and so you're declaring that's how it works. Ineffectively.

Bug Squash
Mar 18, 2009

This is why I keep my ant chat to the ant thread.

Bug Squash
Mar 18, 2009

Captain Monkey posted:

No, I even explicitly said it was fuzzy in real life. I'm not super interested in you continuing to try this weird philosophical trap thing you're doing, it's neither interesting nor relevant.


This doesn't follow; you're just wanting to extrapolate from it, and so you're declaring that's how it works. Ineffectively.

The philosophical trap of demonstrating why your argument is wrong?

Rappaport
Oct 2, 2013

Raenir Salazar posted:

Like, a bunch of rocks can be Turing complete;

You do realize that this thought experiment explicitly contains a non-physical scenario?

Yngwie Mangosteen
Aug 23, 2007

Bug Squash posted:

The philosophical trap of demonstrating why your argument is wrong?

Of you trying to get me to say something I didn't say by repeatedly asking the same question in incrementally different ways to score a point in the argument, while ignoring my replies that say 'no, it's not that.'

Bug Squash
Mar 18, 2009

Captain Monkey posted:

Of you trying to get me to say something I didn't say by repeatedly asking the same question in incrementally different ways to score a point in the argument, while ignoring my replies that say 'no, it's not that.'

I'm genuinely not trying to be a dick, and I do want to understand your point of view, but you're in a hard physics chat throwing out "it's fuzzy" as an explanation, while I'm trying to show why it doesn't hold up.

Happy to agree to disagree, but you fundamentally haven't engaged with any of the Boltzmann argument.

Edit: for example, you've said "it's different", and I appreciate that you believe that. But you haven't explained why it's different, and I can't see why it would be.

Bug Squash fucked around with this message at 20:13 on Apr 7, 2021

Yngwie Mangosteen
Aug 23, 2007

Bug Squash posted:

I'm genuinely not trying to be a dick, and I do want to understand your point of view, but you're in a hard physics chat throwing out "it's fuzzy" as an explanation, while I'm trying to show why it doesn't hold up.

Happy to agree to disagree, but you fundamentally haven't engaged with any of the Boltzmann argument.

You should look up the context within which the Boltzmann Brain 'argument' was originally made. It was created, specifically, as a reductio ad absurdum. It's not a real thing to consider. It's a silly thought experiment.

That's my point, if we need to agree to disagree then that's fine.

Bug Squash
Mar 18, 2009

Captain Monkey posted:

You should look up the context within which the Boltzmann Brain 'argument' was originally made. It was created, specifically, as a reductio ad absurdum. It's not a real thing to consider. It's a silly thought experiment.

That's my point, if we need to agree to disagree then that's fine.

Silly thought experiments are very much worth considering, beams of light on a train and all that. :colbert:

Boltzmann didn't believe that they predominate the cosmos. The point of the thought experiment is for people to demonstrate which part of the argument was wrong.

Yngwie Mangosteen
Aug 23, 2007
Boltzmann didn't believe in them; I'm not sure he ever even wrote anything about them. They weren't thought up by Boltzmann but rather by his detractors, in an effort to undermine his claims about thermodynamics.


you don't even know the context of what you're discussing.


I AM GRANDO
Aug 20, 2006


If the exciting possibility is new physics involving new forms of matter and energy that we haven't yet detected, what's the boring possibility?
