suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

silence_kit posted:

size scaling is massively important, and its grinding to a halt has huge implications for information technology.

Yes, it is very important, but at the same time it also matters massively what exactly you're scaling down, and it may well become worthwhile to scale more exotic things down to some minimum size to add extra features, even if the original core functionality no longer improves by several orders of magnitude every decade.

Also, I'd be surprised if we couldn't find some way to make fab yields higher and wafers substantially cheaper once a hard-ish lower limit on transistor or memory cell size forces everyone to stay at the same size for decades.

suck my woke dick fucked around with this message at 17:55 on Sep 30, 2017

silence_kit
Jul 14, 2011

by the sex ghost

blowfish posted:

Also, I'd be surprised if we couldn't find some way to make fab yields higher and wafers substantially cheaper once a hard-ish lower limit on transistor or memory cell size forces everyone to stay at the same size for decades.

Yes, it will still drop somewhat, but not as quickly as it did with size scaling. E.g., new automobiles are improving in price/capability compared to old ones, but because there is no Moore's Law for automobiles, the cost of a new compact car does not drop by 2x every two years.

steinrokkan
Apr 2, 2011
Probation
Can't post for 22 hours!
Soiled Meat

blowfish posted:

Also, I'd be surprised if we couldn't find some way to make fab yields higher and wafers substantially cheaper once a hard-ish lower limit on transistor or memory cell size forces everyone to stay at the same size for decades.

Aren't the problems with no longer scaling down chips more connected to dealing with the energy and heat of more powerful future systems than with the cost of materials and waste? Even if making bigger chips becomes cheaper and more reliable, the economics of running them may not be that favorable.

silence_kit
Jul 14, 2011

by the sex ghost

steinrokkan posted:

Aren't the problems with no longer scaling down chips more connected to dealing with the energy and heat of more powerful future systems than with the cost of materials and waste? Even if making bigger chips becomes cheaper and more reliable, the economics of running them may not be that favorable.

I forgot about this benefit of scaling. It is even in Moore's original pop-science article: "In addition, power is needed primarily to drive the various lines and capacitances associated with the system. As long as a function is confined to a small area on a wafer, the amount of capacitance which must be driven is distinctly limited."

I think I've read that US data centers currently account for around 1% of US energy consumption. Had the semiconductor industry in the 70's ignored Moore's Law (shrink transistor & wire sizes) and instead obeyed Owlofcreamcheese's law (double die area to get greater functionality), then to reach the level of information technology functionality we enjoy today in this hypothetical world, we would need to increase worldwide energy generation capability many times over to power his massive chips.
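
If you want a feel for the scale of the difference, here's a quick back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption of mine, not something from Moore's article:

[code]
# Back-of-envelope sketch: 20 two-year generations (~40 years), each
# delivering 2x the devices. All numbers are illustrative, not Moore's.

GENERATIONS = 20

# Idealized Dennard-style scaling: 2x devices in the same die area at
# roughly constant power density, so total chip power stays ~flat.
power_scaling_path = 1.0

# "Owlofcreamcheese's law": 2x devices by doubling die area at a fixed
# feature size, so switched capacitance (and power) doubles each time.
power_big_die_path = 2.0 ** GENERATIONS

print(f"size-scaling path: ~{power_scaling_path:.0f}x original chip power")
print(f"die-doubling path: ~{power_big_die_path:,.0f}x original chip power")
# -> 1x vs ~1,048,576x: the big-die world needs about a million times
#    more power per "chip" for the same 2**20 growth in device count.
[/code]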

silence_kit fucked around with this message at 19:14 on Sep 30, 2017

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

silence_kit posted:

I think I've read that US data centers currently account for around 1% of US energy consumption. Had the semiconductor industry in the 70's ignored Moore's Law (shrink transistor & wire sizes) and instead obeyed Owlofcreamcheese's law (double die area to get greater functionality), then to reach the level of information technology functionality we enjoy today in this hypothetical world, we would need to increase worldwide energy generation capability many times over to power his massive chips.

Moore didn't say anything about shrinking transistors; he didn't even mention transistors. Transistors weren't even the primary component he was working with when he stated his 'law'. Literally the only thing I was saying about the thing you keep harping on is that Moore's law isn't what people pretend it is: it doesn't actually say anything about computing power or transistors or feature size or anything, it just said more components in an integrated circuit at the same cost, doubling every two years. "Moore's law" isn't a synonym for any of the things people use it as a synonym for. And Intel could rightfully claim Moore's law hasn't ended if they run a sale on their old CPUs or if they make a foot-wide chip.

Gum
Mar 9, 2008

oho, a rapist
time to try this puppy out
What is this conversation even about? Why do you care more about whether future technology can technically coincide with something someone said decades ago than you do about what that technology would do and how it would affect society?

steinrokkan
Apr 2, 2011
Probation
Can't post for 22 hours!
Soiled Meat
I thought it was about what technology would look like if current trends proved unsustainable. The definition of Moore's Law is irrelevant for this purpose.

Gum
Mar 9, 2008

oho, a rapist
time to try this puppy out

steinrokkan posted:

I thought it was about what technology would look like if current trends proved unsustainable. The definition of Moore's Law is irrelevant for this purpose.

This is exactly what I was getting at. You guys are letting yourselves get caught up in an argument that is completely irrelevant to what you're trying to discuss.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Gum posted:

What is this conversation even about? Why do you care more about whether future technology can technically coincide with something someone said decades ago than you do about what that technology would do and how it would affect society?


It's a weird bizarro Kurzweil, where Moore's law is this absolutely mystic thing from which every single bit of human progress springs, so if it abandons us all technology will cease to exist or something.

"CDs turned into DVDs turned into blurays and then maybe someday someone will invest billions of dollars and make ultraviolet disks, but past that it's probably impossible to get much denser by just making higher wavelength lasers! Then all human progress will end! laser wavelength has been the fundamental way we've made denser disks!" *blu-ray sales fall 50% per year because no one wants video disks at all*

Gum
Mar 9, 2008

oho, a rapist
time to try this puppy out

Owlofcreamcheese posted:

It's a weird bizarro Kurzweil, where Moore's law is this absolutely mystic thing from which every single bit of human progress springs, so if it abandons us all technology will cease to exist or something.

"CDs turned into DVDs turned into blurays and then maybe someday someone will invest billions of dollars and make ultraviolet disks, but past that it's probably impossible to get much denser by just making higher wavelength lasers! Then all human progress will end! laser wavelength has been the fundamental way we've made denser disks!" *blu-ray sales fall 50% per year because no one wants video disks at all*

Weirdly, I was considering doing a bit on people treating science as mysticism, but I was worried that would come across as singling you out.

silence_kit
Jul 14, 2011

by the sex ghost

Owlofcreamcheese posted:

Moore didn't say anything about shrinking transistors; he didn't even mention transistors. Transistors weren't even the primary component he was working with when he stated his 'law'.

You are wrong here--almost the entire article is about digital circuits, where he points out that transistors were (and still are) the primary devices and play the most important role. He mentions analog circuits at the end, and correctly predicts that increased integration will be beneficial for analog, but will also have some drawbacks and won't be a slam dunk like it is for digital.

Owlofcreamcheese posted:

Literally the only thing I was saying about the thing you keep harping on is that Moore's law isn't what people pretend it is: it doesn't actually say anything about computing power or transistors or feature size or anything, it just said more components in an integrated circuit at the same cost, doubling every two years.

You do have a point that he doesn't emphasize scaling in his article. This is basically the only time you have actually rebutted any of my points in this thread instead of babbling and rapidly changing the subject, so I will concede this point. Moore's original article did not emphasize scaling. He hints at the benefits of scaling a few times, but actually says that by 1975, 10 years after he wrote the article, digital circuits made with then-current patterning technology could expand in area to about the size of a modern CPU and still come in at lower cost/transistor.

On the other hand, had the semiconductor industry actually followed that prescription, what I'll call 'Owlofcreamcheese's Rule', and continued die expansion to date, there is no way we would have anywhere near the electronics capability we have today. To obtain our current level of capability in electronics in Owlofcreamcheese's world, we would need many times the current worldwide electricity production to power the square miles of silicon circuitry wallpapering the planet, and we would need many times the world's current industrial output and wealth to be able to produce the circuits.

Gum posted:

This is exactly what I was getting at. You guys are letting yourselves get caught up in an argument that is completely irrelevant to what you're trying to discuss.

I'm just trying to make the argument that most of the improved capability, improved cost, and improved energy efficiency of electronics over the past 50 years is owed to transistor & wire size scaling, and that the end of scaling is basically the end of rapid improvement and growth that we've grown accustomed to in digital systems. Obviously it will still continue and still be a valuable & profitable industry, but it will turn into something more like the chemical industry.

Edit:

Owlofcreamcheese posted:

It's a weird bizarro Kurzweil, where Moore's law is this absolutely mystic thing from which every single bit of human progress springs, so if it abandons us all technology will cease to exist or something.

"CDs turned into DVDs turned into blurays and then maybe someday someone will invest billions of dollars and make ultraviolet disks, but past that it's probably impossible to get much denser by just making higher wavelength lasers! Then all human progress will end! laser wavelength has been the fundamental way we've made denser disks!" *blu-ray sales fall 50% per year because no one wants video disks at all*

Lol, clearly you are projecting this onto me.

silence_kit fucked around with this message at 22:00 on Sep 30, 2017

crazypenguin
Mar 9, 2005
nothing witty here, move along
I, too, am not even sure what the root of this argument is about, but this discussion about "Moore's law" is a bit aggravating to read.

"The end of Moore's law" was a respectable position... 4 or so years ago. It's slightly out of date. The panic got its start because of what's called "Dennard scaling". You used to be able to take the same transistor design, make it smaller, and you could pack in more transistors, at faster speeds, using less power, all for less money.

Dennard scaling was the one-trick pony of the entire semiconductor industry for quite a while. Its apparent end in roughly 2006 with about the 65nm process, and the recognition of this change a few years later, led to a lot of panic-running-around-screaming-while-setting-yourself-on-fire. For a while, it looked like the "end of Moore's law."

The one-trick pony is dead, but all that's actually changed is that we now have to find innovative transistor designs, instead of just "the old one but smaller, done!" We invented FinFETs, and optimization of this kind of transistor has taken us to 14nm, and it seems to a 10nm process too. After that, there's a laundry list of more transistor designs in the works; we'll see which wins.

Since the end of Dennard scaling, we've still made transistors smaller and cheaper and lower power. Just about the only thing that hasn't improved as much as it used to is clock speeds. Oh well. You can call that "the end of Moore's law" if you want, but to say that scaling has "ground to a halt" is absolutely wrong.

If you're interested in this stuff, I recommend following a trade outlet like semiengineering. The industry seems to think it has a pretty good idea of what it's doing for the next 15-20 years, and that the economics will work out to at least 3nm processes.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

silence_kit posted:

On the other hand, had the semiconductor industry actually followed that prescription, what I'll call 'Owlofcreamcheese's Rule', and continued die expansion to date, there is no way we would have anywhere near the electronics capability we have today. To obtain our current level of capability in electronics in Owlofcreamcheese's world, we would need many times the current worldwide electricity production to power the square miles of silicon circuitry wallpapering the planet, and we would need many times the world's current industrial output and wealth to be able to produce the circuits.

Literally the only point I was making was that Moore's law isn't what people (you, apparently) think it is, and "Moore's law" literally only promised you more components for a price, doubling every 2 years. So in 2029, if Intel just throws enough capacitors in a little bag, then poops in the bag, then prices it at 145 dollars, Moore's law has continued.

Like you have gone on and on and on about this, I am not claiming what you think I'm claiming.

silence_kit
Jul 14, 2011

by the sex ghost

Owlofcreamcheese posted:

I am not claiming what you think I'm claiming.

Ok, fine, I re-read your original posts on the subject, and your claim wasn't as strong as what I thought you were claiming.

I still think your claim that Moore's Law could continue through die size increases for many generations after the end of transistor size scaling is nonsensical and wrong, except for high-margin digital IC products like GPUs and desktop, laptop, & server CPUs. There are a lot of less profitable IC products which would not improve in sales price/function by increasing die area for many generations in a post-size-scaling world.

Edit: wait, Moore's Law is not talking about sales price, it is talking about manufacturing unit cost. Read the original article. I clearly am in the right then. Doubling die areas at best maintains current unit cost, and more likely raises unit cost. I've been on tilt ever since you incorrectly lawyered me regarding the historical origins of the blue LED. Finally, I've out-lawyered you.

silence_kit fucked around with this message at 22:42 on Sep 30, 2017

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

silence_kit posted:

I still think your claim that Moore's Law could continue through die size increases for many generations after the end of transistor size scaling is nonsensical and wrong, except for high-margin digital IC products like GPUs and desktop, laptop, & server CPUs. There are a lot of less profitable IC products which would not improve in sales price/function by increasing die area for many generations in a post-size-scaling world.

Cool, do you need to argue about it for 8 more pages?

Literally the only claim I'm making is that "Moore's law" is a thing where people know what they mean when they say it, but it's also a thing where Intel could just release next year's chip as a chip with a bunch of smiley faces drawn on it in resistors, and as long as it has twice as many components and costs less they can say "our chips have continued to follow Moore's law, go gently caress yourself". Because it doesn't say anything about the quality of the chip or how well it runs Call of Duty or anything about anything, and if Intel just wants to have a checkbox on their yearly chip release, "does this follow Moore's law, check one: Yes [ ] No [ ]", there are ways to do that. And then you can scream "this isn't what I meant by Moore's law! I was using it as shorthand for some generalized 'better chip', I didn't mean it literally!"

silence_kit
Jul 14, 2011

by the sex ghost

Owlofcreamcheese posted:

Cool, do you need to argue about it for 8 more pages?

Literally the only claim I'm making is that "Moore's law" is a thing where people know what they mean when they say it, but it's also a thing where Intel could just release next year's chip as a chip with a bunch of smiley faces drawn on it in resistors, and as long as it has twice as many components and costs less they can say "our chips have continued to follow Moore's law, go gently caress yourself". Because it doesn't say anything about the quality of the chip or how well it runs Call of Duty or anything about anything, and if Intel just wants to have a checkbox on their yearly chip release, "does this follow Moore's law, check one: Yes [ ] No [ ]", there are ways to do that. And then you can scream "this isn't what I meant by Moore's law! I was using it as shorthand for some generalized 'better chip', I didn't mean it literally!"

I just realized you are wrong about this. Moore's original article is referring to manufacturing unit cost, not sales price. I've out-'well, technically'-ed the guy who sometimes misses the forest for the trees regarding technology.

Also, I've never really conflated Moore's Law with other things in the way you so desperately want to refute in this thread.

This was an interesting derail, ha, well, at least for me--I had never carefully read Moore's original popular science article, and while I knew that it was primarily focused on cost/function, I didn't realize that he really didn't emphasize scaling as the way to achieve that end, and was mostly talking about how improved manufacturing know-how would improve IC cost/device or cost/function.

silence_kit fucked around with this message at 22:58 on Sep 30, 2017

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

crazypenguin posted:

Just about the only thing that hasn't improved as much as it used to is clock speeds.

Clock speed has risen pretty steadily; the Pentium 4 era just got really nutty with huge clock speeds because "GHz" was the only metric people bought chips by.

Like, if you look at the history of clock speed, it gets up to around 1.3GHz, then Pentium 4s happen and zoom right up to almost 4GHz. Then AMD does the "megahertz myth" ad campaign, 2-core chips come out, and clock speed is right back down to 1.6, then continues growing at about the same old rate it was: from 2011 to 2013 Intel's chips grew from 3.3 to 4.3, except for some crazy thing AMD made in 2014 that was 5GHz and everyone hated.

Like, part of it was that stuff slowed down when we went from single core to multicore and things had to catch back up, but part of it was that GHz was the number they put on the box in the Pentium days, so they just made that one number pointlessly huge, and once that strategy broke it went back closer to the growth curve it had been on.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

silence_kit posted:

I just realized you are wrong about this. Moore's original article is referring to manufacturing unit cost, not sales price.

I think threads like this tend to treat the products companies like Intel sell as being the perfect, ideal, good-faith best effort in every way, so that the sale price and manufacturing cost are tightly coupled and close to each other. Which is probably not actually the case for a company that got to be a near monopoly for an entire market that is vital to the functioning of all modern human civilization. But that stuff always sounds like conspiracy theory. (Pretty much everything about the way Intel sells CPUs is exploitative, including Moore's law at times.)

crazypenguin
Mar 9, 2005
nothing witty here, move along
You're talking about CPUs; I'm talking about fundamental transistor switching speeds.

You can still kinda see the difference when looking at CPUs, but you're right that you have to be wary of the companies making bad decisions. Roughly the right inflection point to pick is the introduction of the i3/5/7 branding, which came after the Pentium 4-era mistakes, and those had base clocks of almost 3 GHz.

So in the last 7 years, we've gone up 30% in clock speeds, while in the previous 20 years we went up 12000%. That's what Dennard scaling was doing for us.
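
In compound-annual-growth terms (reading "went up 30%" as a 1.3x factor and "12000%" as roughly 121x, both rough readings of the figures above):

[code]
# Sanity-checking the clock-speed figures as compound annual growth rates.
# (Taking "went up 30%" as 1.3x total and "12000%" as ~121x total.)

def cagr(total_factor: float, years: float) -> float:
    """Annual growth rate implied by a total growth factor over `years`."""
    return total_factor ** (1 / years) - 1

print(f"last 7 years:      {cagr(1.30, 7):.1%} per year")    # ~3.8%/yr
print(f"previous 20 years: {cagr(121.0, 20):.1%} per year")  # ~27%/yr
[/code]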

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

crazypenguin posted:

You're talking about CPUs; I'm talking about fundamental transistor switching speeds.

Oh well, that keeps going up. You are talking about consumer product CPU clock speed.


https://www.extremetech.com/extreme/193343-darpa-creates-first-1thz-computer-chip-earns-guinness-world-record

Morbus
May 18, 2004

silence_kit posted:

...
Could you write about this? How could you do nano-patterning at low cost on the hard drive platters? I'm assuming that the lithography used in integrated circuits is ruled out for being too slow and too expensive to be used over the large area of a hard drive platter. But you don't need arbitrary shapes like in integrated circuits, just a grid shape (right?), so maybe there is another technique that could work. And of course there is bottom-up patterning, but AFAIK those strategies traditionally haven't been consistent enough to be relied upon for real electronics products, where basically perfect accuracy and fidelity in shapes are demanded.

Yep, pretty much. Traditional optical lithography is just way too expensive, and also doesn't scale down well to the planar dimensions that would be needed to provide enough long-term potential benefit to justify the cost. You aren't quite making a grid--it needs to have a circumferential line of dots at each radius. There are also servo patterns hard-written in. But because of the inherent symmetry, certain tricks can be employed to decrease your feature sizes more easily than in a regular integrated circuit (the same is basically true for e.g. DRAM or flash vs. a CPU).

Lots of strategies were looked at including things like holography but in the end what we ended up with was:

1. Make a patterned quartz hardmask using direct-write electron beam lithography, similar to what is used to make photomasks for chips
2. Coat the disk with a photoresist
3. Smoosh the hardmask into the disk and flash UV light through it to cure the resist. Special chemicals coat the mask to prevent sticking
4. Lift off and then plasma etch

So: direct-write electron beam lithography for a mask, followed by photo-nanoimprint lithography. This could get bit cells down to the 10's of nanometers. To get lower than that, a variation was used where you use the above process to make a superpattern with, say, 100nm pitch, and then use a bottom-up self-assembly process involving block co-polymers to divide that into a grid of, say, 10nm. That way, you avoid the problem you mention of self-assembled processes not having sufficient long-range order or consistency, since you only rely on it as a pitch-multiplier step over ~100nm distances. You also avoid the problem of needing your self-assembly process to somehow have circular symmetry, since the difference between a line and an arc on a 95mm disk is negligible over 100nm.

There were a lot of reasons these efforts were cancelled. In my opinion there were three main ones:

1. Apart from cost and throughput issues, the nanoimprint process simply was too far behind where it needed to be in terms of defects. It was clear that a huge amount of work needed to be done before there was a good enough lithography process. Sooner or later the semiconductor industry needs to solve these same problems, so maybe just let them sort it out and revisit things in 10 years; in the meantime we can focus on HAMR. The "ultimate" technology would very likely have to be bit-patterned HAMR anyway, so it makes sense to focus on HAMR and then revisit bit patterning once the enabling technologies have matured and pitches in the 10nm range are less exotic.

2. Bit-patterning looked like it would need HAMR more than HAMR would need bit-patterning. The best tech demos had something like 1 Tb/sq. in. densities, which was great in 2010 but is what conventional PMR was expected to achieve eventually anyway (and it did). To get much better than that, and certainly to provide a path to the ~10 Tb/sq. in. that was required for the investment to pay off, more exotic processes like the block co-polymer directed self-assembly were required, which really just multiplies any problems you had in the initial lithography. Additionally, as you approach the 10 Tb/sq. in. limit, your bit cells run into superparamagnetic and other issues that it seemed very likely would require HAMR to solve. Incidentally, bit patterning and HAMR have a lot of synergy, with each approach balancing many of the shortcomings of the other. Since the "ultimate" technology has to be both bit-patterned and HAMR, it makes more sense to focus on HAMR and wait for lithography technologies to mature than to try to develop and perfect our own lithography technology from scratch.

3. There were serious issues with planarization. As I mentioned earlier the fly height of the read/write head is like 5nm, which is insane. The resolution and sensitivity of the readback sensor are extremely sensitive functions of fly height, which is why so much effort is made to keep it low. Flying this close requires an extremely smooth surface. The RMS roughness of conventional disks is around a few angstroms. The individual bits in a patterned disk need to be at least ~5nm thick, so going from a 5nm feature height down to a 5 angstrom surface smoothness presents some challenges. You need a much more aggressive etch process, but to prevent this etch from nuking your magnetic layers you need a relatively thick etch stop layer. Any residual etch stop layer is going to add to your effective disk-head separation just as an increased fly height would, so that's no good. In the end, because disk-head spacing has such a huge effect on your achievable capacity, there is very little room for making it worse. You need some way to aggressively etch the surface to < 1nm levels of roughness while at most adding only a few angstroms of residual thickness on top of your bits. This problem could not be well solved, which meant the bit cell advantage that patterned media would need to show any real net gain was pushed further and further out, well into the point where things like HAMR and directed self-assembly might be required.

For example, to be cost effective, you need, say, at least a 50% gain over the best PMR (realistically you need probably way more; this is a best-case number). So you need a 1.5 Tb/sq. in. patterned media product, which if we do everything perfectly requires around 1.5 Tdots/sq. in., or a ~20nm bit size. Because of planarization issues, your 5nm fly height becomes 8nm. Achievable capacity drops at roughly 1.5% per angstrom of spacing, so congrats, you just cut your capacity in half. So now you need a 3 Tb/sq. in. product for a 50% gain, which requires a ~14.5nm grain size. If your bits are only 5nm high, these will be thermally unstable, which means you need HAMR and all the challenges that go with it. You can avoid this if you make your bits 10nm thick, but now your planarization issues are even worse, which means your spacing is worse, which means you need to make even smaller bits, etc. etc.
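
If you want to check that fly-height arithmetic yourself, here's the napkin version; the 1.5%-per-angstrom figure is the rule of thumb quoted above, not a hard constant:

[code]
# Napkin check of the fly-height penalty (1.5%/angstrom is a rule of thumb).

ANGSTROMS_PER_NM = 10
LOSS_PER_ANGSTROM = 0.015

extra_spacing_nm = 8 - 5           # planarization: 5nm fly height -> 8nm
extra_a = extra_spacing_nm * ANGSTROMS_PER_NM

compounded = (1 - LOSS_PER_ANGSTROM) ** extra_a
linear = 1 - LOSS_PER_ANGSTROM * extra_a
print(f"capacity left after +{extra_a} A of spacing: "
      f"{compounded:.0%} compounded, {linear:.0%} linear")
# -> ~64% compounded, 55% linear: napkin-close to "cut your capacity in half"
[/code]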

A final point is that regardless of how optimistic or pessimistic you wanted to be concerning the long term fate of the HDD industry, flash was going to be growing a lot, and since the main HDD companies are huge and vertically integrated hardware manufacturers, it was clear that they would be able to easily enter the flash consumer and enterprise storage market simply by acquiring or building flash manufacturing capability. In terms of firmware, system integration, etc., they already had all the needed know-how. So especially circa 2010-2015, massive investments in speculative HDD technology had to compete with the near term interest of attaining flash fabs. Since doing this would require around as much money as the entire Ford-class supercarrier program, having parallel moonshot HDD programs for technologies 10+ years away is hard to justify.

So in the end the industry as a whole sort of collectively landed on focusing its efforts on HAMR. Some companies came to that conclusion earlier than others, and their HAMR programs are more advanced as a result.

silence_kit
Jul 14, 2011

by the sex ghost

Owlofcreamcheese posted:

(Pretty much everything about the way Intel sells CPUs is exploitative, including Moore's law at times.)

If you talk about their consumer-oriented products, I think they really aren't that exploitative when compared to many other companies or industries. They continue to offer products with better capability every year, although in recent history the improvements have been modest, and prices on their products are decreasing relative to inflation. Computers are not really that expensive, unless you have a serious gadget fetish and buy new ones often, and are not as necessary a good as food, medicine, or automobiles, so the fact that they sell many of their products well above the marginal unit cost isn't really that disturbing to me. I think if you think Intel is exploitative, you probably also have pretty huge issues with the software industry and the publishing and entertainment industries, or you have major issues with capitalism in general.

Morbus posted:

1. Apart from cost and throughput issues, the nanoimprint process simply was too far behind where it needed to be in terms of defects. It was clear that a huge amount of work needed to be done before there was a good enough lithography process. Sooner or later the semiconductor industry needs to solve these same problems, so maybe just let them sort it out and revisit things in 10 years; in the meantime we can focus on HAMR. The "ultimate" technology would very likely have to be bit-patterned HAMR anyway, so it makes sense to focus on HAMR and then revisit bit patterning once the enabling technologies have matured and pitches in the 10nm range are less exotic.

I guess that isn't too surprising to me. The following isn't really proof of anything, wasn't anywhere close to state-of-the-art, and was done by hand in a wet lab and not by a massive piece of capital equipment in a clean room, but I have performed imprint lithography with a silicone rubber stamp, and found it to be way more loaded with defects than optical contact lithography. The minimum feature size with imprint lithography is better than with optical contact lithography, though.

I'm a little shocked that it was seriously considered, but I guess maybe they thought that the people at hard drive companies, who kind of need to be great mechanical (would that be the right term?) engineers to be able to make their products, could be pretty well suited to improving imprint lithography.

Morbus posted:

A final point is that regardless of how optimistic or pessimistic you wanted to be concerning the long term fate of the HDD industry, flash was going to be growing a lot, and since the main HDD companies are huge and vertically integrated hardware manufacturers, it was clear that they would be able to easily enter the flash consumer and enterprise storage market simply by acquiring or building flash manufacturing capability. In terms of firmware, system integration, etc., they already had all the needed know-how. So especially circa 2010-2015, massive investments in speculative HDD technology had to compete with the near term interest of attaining flash fabs. Since doing this would require around as much money as the entire Ford-class supercarrier program, having parallel moonshot HDD programs for technologies 10+ years away is hard to justify.

The following is kind of a change of topic, to the future prospects of magnetic storage. I saw a slide a couple of years ago which claimed that the near future of hard disks is pretty safe--if you look at the incredible demand for digital storage and compare it to the output of all of the flash memory factories in the world, total flash memory production would be a pretty small fraction of the total demand, and it'd be pretty difficult to ramp up flash production quickly enough to meet the rising demand. So, if for no other reason, hard drives would still be the dominant storage technology for the near future.

Do you think that is true? Do you have an opinion on the future prospects of magnetic storage technology? Or is it pretty uncertain, and you honestly have no clue about how it will play out?

silence_kit fucked around with this message at 02:35 on Oct 1, 2017

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

silence_kit posted:

If you talk about their consumer-oriented products, I think they really aren't that exploitative when compared to many other companies or industries. They continue to offer products with better capability every year, although in recent history the improvements have been modest, and prices on their products are decreasing relative to inflation. Computers are not really that expensive, unless you have a serious gadget fetish and buy new ones often, and are not as necessary a good as food, medicine, or automobiles, so the fact that they sell many of their products well above the marginal unit cost isn't really that disturbing to me. I think if you think Intel is exploitative, you probably also have pretty huge issues with the software industry and the publishing and entertainment industries, or you have major issues with capitalism in general.

Yeah, capitalism in general is pretty iffy.

But I specifically mean Intel creates and sells products in a way that makes them the most money (duh), but people often talk about the history of technology through the lens of the things they released as if everything was always the best possible effort to deliver the best possible product in every way at all times. Which in some cases was probably close to true and at other times was probably quite far from that.

Like, when GHz was the number on the box that people cared about and the clock speed of chips started going up and up and up, that was probably more a decision someone on the marketing team made than the wise scientists deciding what was the optimal path forward for computation. Or like the way Moore's law looks so perfectly smooth in CPUs: that is as much a factor of Intel deciding it wanted it to be that way as of raw science happening to always make it so perfect.

Morbus
May 18, 2004

Owlofcreamcheese posted:

...
Or like the way Moore's law looks so perfectly smooth in CPUs: that is as much a factor of Intel deciding it wanted it to be that way as of raw science happening to always make it so perfect.

Yeah, basically. This is a consequence of program management requirements as much as anything else. Semiconductor manufacturing is insanely expensive and complicated, so from a business standpoint you need to plan your tech roadmap well in advance and set goals you are reasonably confident that you can make. This way you can focus your efforts into one general scheme you know will get you to a product vs. spending a hundred billion dollars on something that doesn't work and going bankrupt. This has become more and more important as fab costs have gone up exponentially.

Programs are usually defined before any of the underlying tech is there. People just say, well, the way things look now, we are at around a 10% growth rate, and marketing people tell us we need a 40% improvement over the current gen to have a new product. Based on fundamental physics, scaling, and whatever collection of tricks we have been working on, we think that should be possible, so we're just gonna define a next-gen product that is 40% better than what we're making now, and plan a timeframe towards that based on this 10% growth rate. Then you look at what sort of broad technical problems need to be solved to do that, and everything just coalesces around the initial program definition. This is a huge oversimplification but it gives you an idea of why product releases can follow such a smooth improvement curve.

Arglebargle III
Feb 21, 2006

Gum posted:

Weirdly, I was considering doing a bit on people treating science as mysticism, but I was worried that would come across as singling you out.

If you can't see the mystical in probing the nature of time you're no fun.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Morbus posted:

Yeah, basically. This is a consequence of program management requirements as much as anything else. Semiconductor manufacturing is insanely expensive and complicated, so from a business standpoint you need to plan your tech roadmap well in advance and set goals you are reasonably confident that you can make. This way you can focus your efforts into one general scheme you know will get you to a product vs. spending a hundred billion dollars on something that doesn't work and going bankrupt. This has become more and more important as fab costs have gone up exponentially.

Programs are usually defined before any of the underlying tech is there. People just say, well, the way things look now, we are at around a 10% growth rate, and marketing people tell us we need a 40% improvement over the current gen to have a new product. Based on fundamental physics, scaling, and whatever collection of tricks we have been working on, we think that should be possible, so we're just gonna define a next-gen product that is 40% better than what we're making now, and plan a timeframe towards that based on this 10% growth rate. Then you look at what sort of broad technical problems need to be solved to do that, and everything just coalesces around the initial program definition. This is a huge oversimplification but it gives you an idea of why product releases can follow such a smooth improvement curve.

And it's definitely not evil for a business to have a business model. But Intel also definitely utilizes its market dominance at times in ways that don't give the consumer the best product. Which is borne out by them sometimes getting in trouble with the law for anticompetitive practices.

Like, it's not something to go wild with; Intel wasn't holding back i9s in 1972 or anything. But sometimes people look at release schedules and say "this is the progress science made" when, if you look into it more, there is also an aspect of business too, evil or not. Real science let Pentium 4s have high clock speeds and that was part of it, but clock speed was mostly so high because the marketing team decided that was the number to print big on the box, because it was the thing they were most ahead of their rival in. So a conversation about why clock speed growth went up then down is a conversation about the history of scientific development to a degree, but it's also just "the product lines did a thing".

Morbus
May 18, 2004

silence_kit posted:

...
The following is kind of a change of topic, to the future prospects of magnetic storage. I saw a slide a couple of years ago which claimed that the near future of hard disks is pretty safe--if you look at the incredible demand for digital storage and compare it to the output of all of the flash memory factories in the world, total flash memory production would be a pretty small fraction of the total demand, and it'd be pretty difficult to ramp up flash production quickly enough to meet the rising demand. So, if for no other reason, hard drives would still be the dominant storage technology for the near future.

Do you think that is true? Do you have an opinion on the future prospects of magnetic storage technology? Or is it pretty uncertain, and you honestly have no clue about how it will play out?


It's kind of true. For example, in 2016, a total of ~600 exabytes of HDD storage was sold. For all flash memory combined (not just SSDs), it was around 50 exabytes. That's a >10:1 ratio. Due to yearly variations and business-cycle reasons, neither of these numbers is an exact representation of the total production capacity of either HDDs or flash, so some years or quarters it has been as close as 5:1 or 6:1. But regardless, to displace HDDs, either A.) the global demand for storage needs to decline sharply, for some reason, or B.) you need to increase global flash production capacity by 500-1000%. A.) seems unlikely, so we can focus on the supply economics of increasing flash production. The largest NAND flash superfabs in the world cost $10-15 billion and can produce around 5-10 EB of flash per year. So if you need to grow capacity by at least several hundred EB, you are talking about needing roughly in the range of 40-100 new fabs, for a cost of ~400 billion to a trillion dollars just in up-front capital costs, to say nothing of OpEx, etc. In comparison, the entire HDD industry that you are trying to displace has gross revenue of maybe 20-25 billion. Even if you assume the market is willing to triple what it pays for storage, with no demand elasticity, and that you can maintain 60% margins, that's just ~50 billion of profit. It's gonna take you at least 10-20 years just to break even on the cost of the fabs, which is not enough time to recover your investment over their useful lifetime, and this is in a framework making very generous assumptions.
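
Here's the napkin math behind those fab numbers, using the ranges above and taking "several hundred EB" as the 2016 HDD-minus-flash gap of ~550 EB/year (all of these inputs are rough):

[code]
# Supply-side napkin math, using the rough 2016-era figures from above.

hdd_eb_per_year = 600
flash_eb_per_year = 50
shortfall = hdd_eb_per_year - flash_eb_per_year   # ~550 EB/yr to replace

fab_cost_billion = (10, 15)   # $B per NAND superfab (cheap, expensive)
fab_output_eb = (10, 5)       # EB/yr per fab (best case, worst case)

for cost, output in zip(fab_cost_billion, fab_output_eb):
    fabs = shortfall / output
    print(f"{fabs:.0f} fabs at ${cost}B each -> ${fabs * cost:,.0f}B up front")
# -> ~55 to 110 fabs, ~$550B to ~$1,650B: same order of magnitude as the
#    "40-100 fabs, $400B to $1T" range quoted above.
[/code]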

I think this argument is wrong. For a few reasons

1. The basic supply-economics argument is overly simple and ignores why flash is growing/profitable in the first place. Not all flash growth is displacement, and some HDD revenue is more profitable to displace than other revenue.

Remember that part about "unless global storage demand drops for some reason"? In certain market sectors, like laptops, and to a lesser but increasing extent desktop PCs, this very much has been the case. The storage requirements of a basic PC have not really increased much over the years, and people are buying fewer PCs than they used to. In a lot of cases, for things like laptops, people are willing to have less storage than they had when they were using HDDs, partially because they don't need the extra storage anyway, partially because the performance increase is more important. In these cases, flash doesn't have to compete with HDD on a per-byte basis, which breaks the above argument: if the desktop/laptop PC storage market is X petabytes today, you may nonetheless be able to totally displace that market with less than X petabytes eventually.

However, global storage demand is nonetheless increasing rapidly. A lot of this is due to more and more data being stored on the internet. Streaming video services, like Netflix or the NSA, are huge drivers of growth. So are the datacenters operated by the likes of Google and Amazon. Apart from that a lot of the data that may have previously been stored on a PC is now being stored at the enterprise level on the internet. Personal cloud storage is a small part of this, but really it has more to do with the fact that most of the software and data people interact with on a daily basis is over the internet. In any case, as of today ~60% of the demand is for capacity enterprise storage and that is growing.

In comparison, the huge increase in demand for flash and its profitability is being largely driven by new applications enabled by that technology, which never really used disk drives to begin with, like smartphones, digital cameras, tablets... For a lot of these applications there is (at present, anyway) no real competing technology to flash, so it can be sold at whatever price the market will bear--it doesn't have to disrupt any existing market by raising prices of existing commodities. Additionally, these are mostly applications with either fixed or slowly growing storage requirements, which again means that flash doesn't need to worry so much about cost per byte, since the demand is on a per-unit basis (I must have the new iPhone!) rather than a per-byte basis (I want to spend money specifically to quadruple my smartphone's memory!). Apple is willing and able to spend A LOT more for a measly 64GB chip of flash than an enterprise data center is willing to spend for 64GB of bulk storage.

Likewise the SSD sector is being exclusively driven by either new applications (I can use SSDs to do something I couldn't do before), or by cannibalizing HDD applications where raw storage capacity is not needed (high performance enterprise, PCs).

For now, flash is growing new markets and displacing HDDs in sectors where performance > capacity. Most of the demand and demand growth for storage is in capacity-sensitive applications (for now), so flash eventually needs to compete there (probably). Once the low-hanging fruit is plucked, further penetration will require flash to compete on a per-byte basis, and at that point the basic supply-economics argument outlined in the first paragraph applies. Then it becomes dubious whether flash can displace HDD while still being profitable. So, just from a purely economic point of view, the real question is how far flash can penetrate until it hits the capacity-sensitive wall, where fab costs are simply too expensive compared to what you can sell the chips for. Since right now most of the global storage demand is extremely capacity- and price-sensitive, the answer would seem to be "not that far". But this economic argument is technologically static, and there are technical aspects that also need to be considered...

2. The delineation between capacity-sensitive and performance-sensitive storage is unclear, and may change over time

There are obviously certain kinds of storage where frequent access and computation on the data are not necessary, and in these applications SSDs absolutely need to compete with HDDs on a cost/byte basis. But is this a static thing? I mean, today nobody at Facebook wants to do intensive computation on 800 trillion pictures of people's cats, but if they could, and found a reason that they should, how much would they be willing to pay? gently caress if I know. A lot of people make the argument that because of "big data something something", it is going to be increasingly worthwhile for companies to pay a premium for faster SSDs even on data that mostly just sits around, because by deep learning neural networks on the noosphere blah blah blah somehow money. Obviously I am not terribly convinced by these kinds of arguments, but it is something to consider. Personally, I think that even if people identify previously archival datasets that they want to do intensive computation on, in a holistic way that can't be done in RAM, and are willing to pay big bucks to do so, they aren't going to do that on ALL their data ALL the time. They can just load that data off disk into their Big Data 9000 SSD array, teach IBM Watson how to figure out if someone is going to join ISIS from their cat pictures, and then move on to the next thing. To me this falls more into the "new application" sphere than the "displace HDD" sphere. Some big data evangelicals take it for granted that of COURSE you would want to process everything in every way at every second, but I dunno. Even if you tried to do that, your datacenter would loving melt. But still, it does seem plausible that certain middle-ground cases that are presently considered mostly capacity-sensitive but sort of performance-sensitive may shift over time in a way that allows creeping SSD penetration into previously "safe", "archival" spaces, since the market would be willing to pay more for new capabilities. And this is hard to predict.

Additionally, a particular market may be capacity-sensitive early in its life, only to shift to being performance-sensitive once storage capacity gets "good enough". Music players were the first example of this. Initially, being able to put 30,000 vs 3,000 songs on an iPod was important; there was a real demand for more storage. But nobody gives a poo poo if they can put 3 million songs on a music player, and even very slowly improving SSD technology will eventually reach cost parity (or at least become cheap enough) against a fixed target. The situation with laptops and desktop PCs is similar. Even today, with things like streaming video, there may eventually be "enough" storage, at which point SSD may take over, provided it can keep improving. There will always be some applications where there is never enough storage, as well as applications that don't yet exist but are just waiting for storage technologies to improve. But it's difficult to predict even today where the threshold is for capacity- vs. performance-sensitive applications, and it's near impossible to determine where it will be in the future.

3. Both of the above economic arguments ignore the comparative technological trajectories of SSD vs HDD. Also, technological progress drives demand.

The original basic supply-economics argument looks at how many exabytes of capacity are needed, how many exabytes a fab makes, and goes from there. But of course, over time, both of these numbers are expected to grow, and not necessarily proportionately. For example, what if global storage demand in 10 years increases by 100%, but the next generation of SSD fabs only improves by 40% in terms of how many exabytes per year they can produce? Obviously, the situation would be heavily modified in favor of HDD. Alternatively, what if storage demand only increases by 50%, while SSD production increases by 80%? In particular, if SSD technology advances rapidly enough while HDD stagnates, SSD could achieve cost parity. This might happen if, for example, NAND flash can continue scaling faster or longer than HDD. Or, if improvements in semiconductor technology hit a wall or slow down sooner or faster than HDDs do, the cost/byte disparity could grow to the point where SSD is never able to penetrate predominantly capacity-sensitive markets, and may not even be able to satisfy demand in performance-sensitive applications that are stuck using HDDs.

Additionally, the demand for global storage is, at least for now, coupled to improvements in HDD technology. If, for example, someone pulls some HAMR breakthrough out of their rear end and triples the capacity of HDDs overnight (like what happened with giant magnetoresistance in the 90's, basically), that could, in theory, result in either the same global storage for 1/3 the price, or triple the amount of storage for the same price. Historically, it is always much closer to the latter, since increases in storage capacity enable new applications and new markets. The result is that any technology which is able to scale substantially better than its competitors will eventually dominate, since it will create the bulk of demand that only it can fill. This applies both for sudden huge breakthroughs and for smaller differences in growth compounded over a longer time.

Both HDD and SSD will continue scaling exponentially, just at unclear rates. This introduces a lot of uncertainty since 12% growth vs 15% growth over, say, 10 years, translates into a 30% difference. A 30% gap may make the difference between a particular market being profitable or unprofitable for flash; it may make the difference between slowly but surely displacing HDD vs. never being able to catch up. As with everything involving compounding growth, being able to accurately predict growth rates over long periods of time is really the most important thing, and it's a lot harder than mumbling about supply economics or capacity vs. performance sensitivity under static market and technology conditions.
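
The 12% vs. 15% point is easy to verify with the toy numbers from the sentence above:

[code]
# Toy check of the compounding point: a 3-point difference in annual growth
# rate opens a ~30% gap after a decade (and ~70% after two).

def gap(r_fast: float, r_slow: float, years: int) -> float:
    """Relative size gap after compounding two growth rates for `years`."""
    return (1 + r_fast) ** years / (1 + r_slow) ** years - 1

print(f"15% vs 12% over 10 years: {gap(0.15, 0.12, 10):.0%}")  # ~30%
print(f"15% vs 12% over 20 years: {gap(0.15, 0.12, 20):.0%}")  # ~70%
[/code]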

The upshot of all this is that nobody really knows, and it's very difficult to say. The endgame is hugely dependent on the comparative technological improvement of HDD vs. SSD, which is not widely appreciated, and completely ignored by most arguments. In the near-term, the basic supply economic argument against SSDs sort of applies, but the exact extent of SSD penetration depends on how customers prioritize capacity over performance which is a bit fuzzy and subject to change over time. The fact that the continued scaling of conventional micro and nanotechnologies like flash and HDD can no longer be well predicted really complicates any prediction. And as a final wrench to throw in the whole mess, the largest HDD manufacturer is also the 2nd largest SSD manufacturer, and it is likely in the foreseeable future that all (both...) HDD companies will also be SSD companies. This opens up a lot of weird and arguably anti-competitive incentives that complicate any straightforward economic argument.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Morbus posted:

The largest NAND flash superfabs in the world cost $10-15 billion and can produce around 5-10 EB of flash per year. So if you need to grow capacity by at least several hundred EB, you are talking about needing roughly in the range of 40-100 new fabs, for a cost of ~400 billion to a trillion dollars just in up-front capital costs, to say nothing of OpEx, etc.

I get that you are saying "they would need to build many new factories to match the output of the HDD industry", since there are probably not enough SSDs manufactured in the entire world if everyone wanted one, but doing the math by the number of bytes a factory outputs seems really, really silly.

silence_kit
Jul 14, 2011

by the sex ghost

Owlofcreamcheese posted:

I get that you are saying "they would need to build many new factories to match the output of the HDD industry", since there are probably not enough SSDs manufactured in the entire world if everyone wanted one, but doing the math by the number of bytes a factory outputs seems really, really silly.

Ahhh, he addresses why that argument is a little simplistic in many ways in his post! Read his post!

As Morbus says in his post, though, the argument is not totally wrong and is not absurd--a lot of the market for digital storage is not in iPhones or laptop computers but in enterprise, where in many applications total capacity & cost/GB are still very important compared to speed, where people are less willing to pay premiums for higher read/write speeds and/or form factor, and where hard drives are still king. That may change, though, and it depends heavily on how exactly the total capacity, cost/GB, & speed differences between the two technologies continue to change in the future, and on what the technical requirements for new applications will be. There is also the fact that since many of the hard drive companies own flash memory companies, they'll probably orient the two sets of products to be complementary technologies.

silence_kit fucked around with this message at 16:33 on Oct 1, 2017

Cockmaster
Feb 24, 2002

silence_kit posted:

Ahhh, he addresses why that argument is a little simplistic in many ways in his post! Read his post!

As Morbus says in his post, though, the argument is not totally wrong and is not absurd--a lot of the market for digital storage is not in iPhones or laptop computers but in enterprise, where in many applications total capacity & cost/GB are still very important compared to speed, where people are less willing to pay premiums for higher read/write speeds and/or form factor, and where hard drives are still king. That may change, though, and it depends heavily on how exactly the total capacity, cost/GB, & speed differences between the two technologies continue to change in the future. There is also the fact that since many of the hard drive companies own flash memory companies, they'll probably orient the two sets of products to be complementary technologies.

There's also the fact that SSDs consume way less power than mechanical drives. For enterprise applications involving many many drives (considering both the power going into the drives and the power used to keep everything cool), it's possible that this might occasionally justify the higher cost/GB.

I could've sworn I had seen an article about some major internet-related business switching to SSDs at least partially for that purpose, but I have no idea who it was.

Xae
Jan 19, 2005

Cockmaster posted:

There's also the fact that SSDs consume way less power than mechanical drives. For enterprise applications involving many many drives (considering both the power going into the drives and the power used to keep everything cool), it's possible that this might occasionally justify the higher cost/GB.

I could've sworn I had seen an article about some major internet-related business switching to SSDs at least partially for that purpose, but I have no idea who it was.

The advantage of flash is the number of I/O operations per second.

Mechanical drives struggle to hit triple digits. Flash drives hit five digits with ease.

Reduced power consumption is a nice bonus, but with mechanical drives costing several times less per gigabyte, you'll never make the money back in power savings.
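
To put numbers on the "triple digits" thing, here's a minimal sketch assuming a typical 7200 RPM drive with an ~8.5 ms average seek (illustrative figures, not any particular model's spec):

```python
# Every random I/O on a mechanical drive pays an average seek plus, on
# average, half a rotation of latency before any data moves.

rpm = 7200
avg_seek_ms = 8.5                          # assumed typical average seek time
rotational_latency_ms = 60_000 / rpm / 2   # half a rotation: ~4.17 ms

service_time_ms = avg_seek_ms + rotational_latency_ms
print(f"~{1000 / service_time_ms:.0f} random IOPS")   # ~79
```

Flash has no moving parts, so there's no mechanical service time to pay at all; that's where the two-orders-of-magnitude gap comes from.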

Morbus
May 18, 2004

Cockmaster posted:

There's also the fact that SSDs consume way less power than mechanical drives. For enterprise applications involving many many drives (considering both the power going into the drives and the power used to keep everything cool), it's possible that this might occasionally justify the higher cost/GB.

I could've sworn I had seen an article about some major internet-related business switching to SSDs at least partially for that purpose, but I have no idea who it was.

Yeah, this is actually more complicated than it seems. In general, SSDs will have much better power consumption per I/O operation, while HDDs will have better power consumption per gigabyte of static storage. So, compared to the now-obsolete 15k RPM performance enterprise HDDs that were used in high-I/O applications, SSDs offered a substantial power savings. But for capacity enterprise, like how Hulu needs to store the complete season history of The Simpsons even though nobody ever watches most of it, it takes less power to run one HDD than an equivalent-sized SSD, especially if you want to do so in a comparable volume. This is especially true for "cold" storage applications, where you can slow down or even stop spinning the platters until they're needed.

As others have pointed out, the main advantage of SSD in enterprise storage is the overwhelmingly better read/write ops per second.
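
To make that concrete, here's a minimal sketch of how the answer flips depending on what you normalize by (all numbers are illustrative assumptions, not measured figures):

```python
# Power per terabyte of static storage vs. power per unit of I/O rate.

hdd = {"name": "HDD", "capacity_tb": 10, "watts": 6.0, "iops": 100}
ssd = {"name": "SSD", "capacity_tb": 2,  "watts": 3.0, "iops": 50_000}

for d in (hdd, ssd):
    print(f"{d['name']}: {d['watts'] / d['capacity_tb']:.2f} W/TB static, "
          f"{d['watts'] / d['iops'] * 1000:.2f} mW per IOPS")
```

With numbers like those, the HDD wins by ~2-3x per terabyte sitting at rest, while the SSD wins by ~1000x per I/O operation, so "which one saves power" depends entirely on whether the workload is capacity-bound or I/O-bound.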

steinrokkan
Apr 2, 2011
Probation
Can't post for 22 hours!
Soiled Meat

Xae posted:

The advantage of flash is the number of I/O operations per second.

Mechanical drives struggle to hit triple digits. Flash drives hit five digits with ease.

Reduced power consumption is a nice bonus, but with mechanical drives costing several times less per gigabyte, you'll never make the money back in power savings.

Here it says that the capital cost of installing a $1500 server in a data center was $8000 due to the power management and cooling infrastructure. Seems that sufficiently cheap SSDs could partially pay for themselves if they could contribute to shaving off some of this overhead.
https://arstechnica.com/information-technology/2009/10/datacenter-energy-costs-outpacing-hardware-prices/
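
A quick sketch of what shaving a few watts per drive is actually worth, counting both the avoided infrastructure and the avoided energy (every number here is an assumption picked for illustration, not a figure from the article):

```python
watts_saved = 4.0           # assumed HDD-vs-SSD difference per drive
infra_usd_per_watt = 16.0   # assumed capital cost of power/cooling per provisioned watt
usd_per_kwh = 0.10          # assumed utility rate
pue = 2.0                   # facility watts drawn per watt delivered to IT gear
years = 5                   # assumed service life

capital = watts_saved * infra_usd_per_watt
energy = watts_saved * pue * 24 * 365 * years / 1000 * usd_per_kwh
print(f"Avoided infrastructure: ${capital:.0f} per drive")
print(f"Avoided energy: ${energy:.0f} per drive over {years} years")
```

Call it ~$100 per drive over five years under those assumptions. That only pays for the switch if the SSD premium for the capacity you need is smaller than that, which for bulk storage it very much isn't yet.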

steinrokkan fucked around with this message at 07:22 on Oct 2, 2017

Xarn
Jun 26, 2015
Probation
Can't post for 23 hours!

steinrokkan posted:

Here it says that the capital cost of installing a $1500 server in a data center was $8000 due to the power management and cooling infrastructure. Seems that sufficiently cheap SSDs could partially pay for themselves if they could contribute to shaving off some of this overhead.
https://arstechnica.com/information-technology/2009/10/datacenter-energy-costs-outpacing-hardware-prices/

HDD power consumption is measured in single-digit watts, and so is SSD (although SSDs can drop a bit under 1 W). There's also an open question of whether the saving holds for cold-ish storage at a given desired capacity (it likely doesn't).

In comparison, a low-power Xeon (one that sacrifices some performance for reasonable power consumption, basically the model used for web servers in data centres) idles at around 30 W, and nobody likes their CPUs idle. :v:
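
In other words (a toy comparison with assumed per-drive figures):

```python
xeon_idle_w = 30          # low-power Xeon at idle, per the post above
hdd_w, ssd_w = 6.0, 1.0   # assumed per-drive draw

for drives in (2, 12, 60):   # web box, storage server, dense JBOD
    print(f"{drives:>2} drives: {drives * hdd_w:>5.0f} W HDD vs "
          f"{drives * ssd_w:>4.0f} W SSD (one idle CPU: {xeon_idle_w} W)")
```

Per drive it's noise next to the CPUs; it only starts to matter in storage-dense chassis, and those are exactly the capacity-bound boxes where SSDs lose on cost/GB anyway.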

mobby_6kl
Aug 9, 2009

by Fluffdaddy


SSDs do use a bit less power, but especially under load the difference isn't that big. I'm sure there are use cases where having a bunch of SSDs in a NAS makes sense, but in practice the cloud storage providers are using spinning disks: https://www.backblaze.com/blog/hard-drive-reliability-stats-q1-2016/

Bates
Jun 15, 2006
Facebook uses SSDs, HDDs, and Blu-ray discs, depending on the data.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord
I imagine there would be a little lag in the biggest of the big data/enterprise companies, just because the bigger you get with data, the more every little detail matters, even if $/byte were made equal to HDDs by a wizard right this second.

Like I can rip the hard disk out of a desktop, put an SSD in the same slot, and it just works; they are equivalent. But when you get to ultra-high-end enterprise stuff they aren't. Really high-end data storage does stuff like worry about the disk geometry, because reads from the outer edge and the center of a platter run at measurably different speeds. Or have robots grabbing and replacing drives based on S.M.A.R.T. reports from the drives. Or have a bunch of VMs where just randomly changing the disk speed would wreak havoc on the CPU loads or memory usage. Or the way best practices for RAID are different for SSDs than for HDDs.

Like I bet there is even somewhere where just the physical weight difference between 100 HDDs and 100 SSDs would mess up something somehow.

Super minor/trivial stuff, but a million little "someone needs to release new hardware, rewrite this software, recalculate this stuff" problems that will keep people running HDDs until they really need SSDs, or until a generation of all-new equipment is needed anyway.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord
Going back to space stuff: SpaceX is apparently phasing out every rocket design except their ultra-ultra-heavy BFR ("big loving rocket").

https://techcrunch.com/2017/09/28/everything-spacex-revealed-about-its-updated-plan-to-reach-mars-by-2022/


It's apparently 150,000 lbs to low Earth orbit if you fly it reusably, and 250,000 lbs if you expend it, which is something like 10 times the capacity of pretty much any other rocket that has ever been in common use.

Morbus
May 18, 2004

So I was reading that article and:

A Crazy Person posted:

Regarding the propulsive landing required for landing on Mars, Musk noted that SpaceX has been perfecting that with Falcon 9 – "That's what they've been doing across 16 successful landings in a row," he said. "And that's really without any redundancy. The Falcon 9 lands on a single engine," he added, "and when you have high reliability with a single engine, then you can land with either of two engines (which the BFR will have), and you probably can achieve landing reliability on par with most commercial airlines."

It's been less than a year and only ~7 landings since they achieved anything better than a 50% success rate with Falcon 9. I don't think adding another engine and doing everything on Mars is going to get you "on par" with the literal one-in-a-million failure rate of commercial airlines... Success rates for anything involving space travel don't tend to get much better than the high 90s.
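
Even granting Musk the redundancy argument on its own terms, the numbers don't get anywhere near airline territory. A toy calculation, assuming engine failures are independent (which is generous, since it ignores common-cause failures) and an illustrative single-engine success rate:

```python
p_single = 0.95   # assumed single-engine landing success rate (illustrative)

# Landing succeeds if at least one of the two engines works.
p_two = 1 - (1 - p_single) ** 2
print(f"Two-engine landing success: {p_two:.4f}")   # 0.9975

print("Commercial airline failure rate: ~1e-6")     # ballpark, for contrast
```

A 0.25% failure rate is still a couple of thousand times worse than airlines, and that's before the "doing everything on Mars" part.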

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Morbus posted:

Success rates for anything involving space travel don't tend to get much better than the high 90's.

Which is bad and should be worked on, not something that should be accepted. Early jet planes were dangerous as hell and it wasn't magic that made them the safest and most reliable method to travel.
