Spatial
Nov 15, 2007

Colostomy Bag posted:

Showing my age, but I always thought it was cool to get Kickstart images for the Amiga (the leaked ones with the dev notes during the 2.0 and 3.0 releases), and the notes would go to great lengths explaining how they saved 200 bytes here or 30 cycles there.

Granted it was a different time, and obviously a programmer's time was cheaper than storage, and my god they had storage constraints to keep to. But as we both know the pendulum swings, and perhaps someday we will get back to efficient code.
I actually do this kind of development. There are lots of embedded applications where this is still vital. :)


Duckaerobics
Jul 22, 2007


Lipstick Apathy
Semiconductor technology is really interesting right now, and I think we are in a swing toward more efficient code. We are rapidly approaching the physical limits for minimum feature size on silicon-based transistors, and industry has been slow to adopt new technologies. A large part of the improvement in processor speed over the past 10 years or so has come from architecture and multi-threading rather than an increase in the number of transistors. Moore's law for silicon technologies is almost done, and I don't think we will see industry go below 4nm.

Hav
Dec 11, 2009

Fun Shoe

Toops posted:

The problem is, a lot of developers in the gaming industry are the worst kind of programmers. They're code-closet cowboys who think they're geniuses because no one can read their tangled web of spaghetti vomit, when in actual fact, they're junior-level hackers who couldn't write a hello world without stack.

I get mad props for being the comment king. My tickets are literally epic poetry with prepositions, tests, proof and fixes. It's not because I like developers; on the contrary, my black heart is sustained by the suffering of node developers. It's because I can't remember the loving thing I was poking at last week and going through my tickets gives me a really good write up of what was happening.

Be the developer you want to read.

Colostomy Bag posted:

Guess what I'm implying is the bloat is going past Moore's law. Lax underlying APIs, ABIs and other issues have bloated code (and more importantly assets) into a 'let's throw the kitchen sink into it' mentality because what the hell, storage is cheap and we need the drat thing to compile.

Agreed. Java was an example because it tends to outstrip the hardware's ability to keep up, because the hardware is the poo poo your cable company throws at you.

Tank Boy Ken posted:

Because the Crusaders went there on a party: https://en.wikipedia.org/wiki/Crusades

:thejoke:

Duckaerobics posted:

Moore's law for silicon technologies is almost done, and I don't think we will see industry go below 4nm.

Only GPUs are really driving this, though. CPUs have largely plateaued, which brings us back to the idea that we're just waiting for the next shift to come about, like the rise of GPUs in the first place. There's still a bunch of room in just increasing the form factor rather than the transistor density. Heat remains a limiter, though.

Hav fucked around with this message at 16:57 on Aug 10, 2017

Colostomy Bag
Jan 11, 2016

:lesnick: C-Bangin' it :lesnick:

Duckaerobics posted:

Semiconductor technology is really interesting right now, and I think we are in a swing toward more efficient code. We are rapidly approaching the physical limits for minimum feature size on silicon-based transistors, and industry has been slow to adopt new technologies. A large part of the improvement in processor speed over the past 10 years or so has come from architecture and multi-threading rather than an increase in the number of transistors. Moore's law for silicon technologies is almost done, and I don't think we will see industry go below 4nm.

Oh for cripes sake, Roberts is moving beyond the speed of light and the limits of silicon to pull off his vision. He made a decree and CIG will make it so.

(But yeah, we're sort of in the twilight years here in regards to earthly limits with atoms and electrons, leakage and all that. Along with the hard road lithography still has to travel to get us to the final destination.)

Pope Corky the IX
Dec 18, 2006

What are you looking at?
Do you think Ben is aware that the decrease in feeling within his lower extremities is not normal?

Toops
Nov 5, 2015

-find mood stabilizers
-also,

Pope Corky the IX posted:

Do you think Ben is aware that the decrease in feeling within his lower extremities is not normal?

Do you think he's ever experienced much feeling in his lower extremities? I mean, nothin' from nothin' ain't nothin'.

Nicholas
Mar 7, 2001

Were those not fine days, when we drank of clear honey, and spoke in calm tones of our love for the stuff?

Toops posted:

gently caress that dude. If you write lovely code because you let yourself get pushed around, if you don't have the backbone to either change the system or quit, then you're a lovely developer. It doesn't matter how "good" you are in theory. This is Star Citizen doublethink. "They're actually really good developers, they just write poo poo code that doesn't work because" reads a lot like "This game is really amazing it's just an unplayable pile of poo poo right now because"

Agreed.

Going back even further, if you know there's going to be more than one "type" of something in your game and you don't immediately start off with a base class to inherit properties and methods from, you're a lovely programmer.

It's telling that the example they used was "missiles," as spaceship missiles were probably not something that came stock with CryEngine, and they had to code them (improperly) from scratch.
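And the base class costs you almost nothing up front. A minimal sketch of the idea (hypothetical names, obviously not CryEngine's or CIG's actual code):

code:
#include <memory>
#include <vector>

// Hypothetical base type: anything that flies and hits things shares
// position, velocity, damage, and an update step.
struct Projectile {
    float pos[3]{};
    float vel[3]{};
    float damage = 0.0f;
    virtual ~Projectile() = default;
    virtual void update(float dt) {
        for (int i = 0; i < 3; ++i) pos[i] += vel[i] * dt;
    }
};

// Each new "type" only overrides what actually differs.
struct Missile : Projectile {
    float fuel = 10.0f;
    void update(float dt) override {
        fuel -= dt;              // burn fuel
        Projectile::update(dt);  // reuse the shared motion code
    }
};

struct Torpedo : Projectile {
    void update(float dt) override {
        vel[2] *= 0.99f;         // whatever differs, e.g. drag
        Projectile::update(dt);
    }
};

int main() {
    std::vector<std::unique_ptr<Projectile>> live;
    live.push_back(std::make_unique<Missile>());
    live.push_back(std::make_unique<Torpedo>());
    for (auto& p : live) p->update(0.016f);  // one 60 Hz tick, one loop for every type
}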

Mirificus
Oct 29, 2004

Kings need not raise their voices to be heard


XK
Jul 9, 2001

Star Citizen is everywhere. It is all around us. Even now, in this very room. You can see it's fidelity when you look out your window or when you watch youtube

Hav posted:

Processor cycles will always be cheaper than programmer time, but the Moore's law expansion means that we have a lot of headroom for inefficiency, which is one of the reasons why we're interpreting rather than compiling most of the time. There hasn't really been much of a change since OO arrived, though. RISC came and went with the vastly increased clock speeds. XK has a better handle on processors, but we went from optimising queries in 1997 to throwing in abstraction layers to increase portability in 2003, and finally to devolving things into key/value stores and nosql around 2010.

RISC was awesome back when assembly language opcodes were hardwired silicon. Long-pipeline 5 GHz CPUs were awesome before they hit physical limits on silicon speed and complexity limits on the pipelines.

RISC is out of fashion for general purpose CPUs, and Intel chucked NetBurst architecture in the bin, for a reason.

Now everything is running hard toward parallel processing, which I believe is just past infancy and into the toddler stage for CPUs. GPUs are really wonderful there (NVIDIA TITAN Xp, 3840 shader processors, imagine a 3840-core CPU) but have much simpler instruction sets, and vastly fewer issues with data interdependency, than a general purpose CPU. This is why modern graphics cards are so mind blowing, and wreck CPUs at very special purpose types of scientific calculations.

You can't even write assembly at the silicon level anymore. Modern CPUs take assembly language opcodes, like the highly complex x86_64 instruction set, and run them through an interpreter before the actual desired operation ever hits the silicon gates that calculate it. That's what the microcode on CPUs is. The more complex the op is, or the less likely it is to occur, the more likely it will be implemented purely in microcode.

The "bare metal" assembly language is broken down into even simpler OPs on the silicon via the microcode instructions. There's like ~3,500+ distinct x86_64 instructions. Try hardwiring that in silicon without both blowing up your die size and having a pathological flaw. Some of the registers can't even be touched except by the microcode. The registers cited in the object code often even aren't real, and only exist ethereally. There's kind of a RISC-like thing going on under the hood of the CPU itself, where a single asm instruction gets implemented via multiple lower level operations, and tricks behind the curtain. You can never see or touch any of that unless you have Intel/AMD microcode encryption and signing keys. The CPU itself is basically a loving a VM environment.

We essentially have a shadow RISC on modern CPUs, and threw away deep pipelines for multiprocessing. Plus, we also gained a lot of multiprocessing on each individual core, which can run handfuls of operations and calculations side by side all on its own, single threaded, with any spare resources going into hyperthreading via virtual cores.
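To make the "side by side, single threaded" part concrete, here's a toy sketch (made-up loops, not from any real codebase). Both functions do identical work on one thread; the second just hands the out-of-order core four independent add chains to keep in flight at once instead of one serial chain, which is typically measurably faster:

code:
#include <cstddef>

// One long dependency chain: every add has to wait for the previous one.
double sum_serial(const double* a, std::size_t n) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i) s += a[i];
    return s;
}

// Four independent chains: the core overlaps them on its own, no extra
// threads involved. (Results can differ in the last bits, since the
// floating-point adds get reassociated.)
double sum_ilp(const double* a, std::size_t n) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; ++i) s0 += a[i];  // leftovers
    return (s0 + s1) + (s2 + s3);
}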

We utilized all this amazing technological development in making super powerful CPUs, and bundling 8 or more on a postage stamp, sold for a few hundred bucks, as an excuse to make lovely inefficient code because now processing is cheap.

Now Chris Roberts is abusing all this to make 750k poly ships you can shove a vending machine into. :(

I just wrote all of this because I'm pouring one out on the floor for all my homies: those countless processing cycles lost to cheap inefficiency.

Eldragon
Feb 22, 2003

Missiles did come stock in CryEngine, and worked fairly well when used as a massive swarm (see MWLL or MW:O). Even the arcane list of missile properties in SC is there in MWLL, if I remember correctly.

I suspect CIG just hosed it all up because they moved everything to be server authoritative. It would also explain why the netcode has been getting steadily worse; the early days of SC (aka Arena Commander) worked halfway decently because it was still mostly client side.

XK
Jul 9, 2001

Star Citizen is everywhere. It is all around us. Even now, in this very room. You can see it's fidelity when you look out your window or when you watch youtube

lol

I think I take some of this almost personally.

People like Archimedes, Euclid, Pythagoras, Euler, and countless others changed the course of human history scratching out equations by hand, many of them on wax tablets.

Now, more calculations than all of them ever did collectively, in their entire lives, are used to push around a Starfarer.

Tokyo Sexwale
Jul 30, 2003


they thought they played an alpha but what they ended up getting was a triple AAA experience with more depth than all other games put together throughout history :smuggo:

Pope Corky the IX
Dec 18, 2006

What are you looking at?

Toops posted:

Do you think he's ever experienced much feeling in his lower extremities? I mean, nothin' from nothin' ain't nothin'.

You are a terrible person.

Colostomy Bag
Jan 11, 2016

:lesnick: C-Bangin' it :lesnick:

XK posted:

RISC was awesome back when assembly language opcodes were hardwired silicon. Long-pipeline 5 GHz CPUs were awesome before they hit physical limits on silicon speed and complexity limits on the pipelines.

RISC is out of fashion for general purpose CPUs, and Intel chucked NetBurst architecture in the bin, for a reason.


ARM? Given its acronym, fairly prevalent as a RISC architecture. But yeah, in general RISC was supposed to be the second coming of Christ and never panned out, and like you said, most CISC instructions are decoded into basic RISC instructions.

One of the easiest CPUs to emulate is the MIPS architecture. The hardest thing back in the day was being limited in how many registers you had available on your average CPU. SPARC/MIPS had a ton of registers. But there was a caveat: "register windows" that you needed to slide around.

But I digress... good times. Alpha, 275 MHz when the Pentium could only hit 60/66. The difference between CISC/RISC, pipelining and all that jazz.
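For anyone wondering why MIPS is the easy one to emulate: every instruction is a single aligned 32-bit word carved into the same few fields, so the fetch/decode loop is almost nothing. A toy sketch handling just two ops (nowhere near a real emulator, though the field layouts and opcodes are the real MIPS-I ones):

code:
#include <cstdint>
#include <vector>

// Toy MIPS-I core: fixed-size instructions make decode a few shifts and masks.
struct Cpu {
    uint32_t pc = 0;
    uint32_t reg[32]{};  // $0..$31, with $0 hardwired to zero
};

void step(Cpu& c, const std::vector<uint32_t>& mem) {
    uint32_t ins = mem[c.pc / 4];           // fetch: always one aligned word
    c.pc += 4;
    uint32_t op = ins >> 26;                // opcode is always bits 31..26
    uint32_t rs = (ins >> 21) & 31;
    uint32_t rt = (ins >> 16) & 31;
    uint32_t rd = (ins >> 11) & 31;
    int32_t  imm = int16_t(ins & 0xffff);   // sign-extended immediate
    switch (op) {
        case 0x00:                          // R-type: funct field picks the op
            if ((ins & 0x3f) == 0x21)       // addu rd, rs, rt
                c.reg[rd] = c.reg[rs] + c.reg[rt];
            break;
        case 0x09:                          // addiu rt, rs, imm
            c.reg[rt] = c.reg[rs] + uint32_t(imm);
            break;
        // ...every other instruction decodes from the same few fields
    }
    c.reg[0] = 0;                           // $zero stays zero no matter what
}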

Mirificus
Oct 29, 2004

Kings need not raise their voices to be heard


Percelus
Sep 9, 2012

My command, your wish is

what a bunch of rubes

Hav
Dec 11, 2009

Fun Shoe

XK posted:

RISC was awesome back when assembly language opcodes were hardwired silicon. Long-pipeline 5 GHz CPUs were awesome before they hit physical limits on silicon speed and complexity limits on the pipelines.

RISC is out of fashion for general purpose CPUs, and Intel chucked NetBurst architecture in the bin, for a reason.

Now everything is running hard toward parallel processing, which I believe is just past infancy and into the toddler stage for CPUs. GPUs are really wonderful there (NVIDIA TITAN Xp, 3840 shader processors, imagine a 3840-core CPU) but have much simpler instruction sets, and vastly fewer issues with data interdependency, than a general purpose CPU. This is why modern graphics cards are so mind blowing, and wreck CPUs at very special purpose types of scientific calculations.

You can't even write assembly at the silicon level anymore. Modern CPUs take assembly language opcodes, like the highly complex x86_64 instruction set, and run them through an interpreter before the actual desired operation ever hits the silicon gates that calculate it. That's what the microcode on CPUs is. The more complex the op is, or the less likely it is to occur, the more likely it will be implemented purely in microcode.

The "bare metal" assembly language is broken down into even simpler OPs on the silicon via the microcode instructions. There's like ~3,500+ distinct x86_64 instructions. Try hardwiring that in silicon without both blowing up your die size and having a pathological flaw. Some of the registers can't even be touched except by the microcode. The registers cited in the object code often even aren't real, and only exist ethereally. There's kind of a RISC-like thing going on under the hood of the CPU itself, where a single asm instruction gets implemented via multiple lower level operations, and tricks behind the curtain. You can never see or touch any of that unless you have Intel/AMD microcode encryption and signing keys. The CPU itself is basically a loving a VM environment.

We essentially have a shadow RISC on modern CPUs, and threw away deep pipelines for multiprocessing. Plus, we also gained a lot of multiprocessing on each individual core, which can run handfuls of operations and calculations side by side all on its own, single threaded, with any spare resources going into hyperthreading via virtual cores.

We utilized all this amazing technological development in making super powerful CPUs, and bundling 8 or more on a postage stamp, sold for a few hundred bucks, as an excuse to make lovely inefficient code because now processing is cheap.

Now Chris Roberts is abusing all this to make 750k poly ships you can shove a vending machine into. :(

I just wrote all of this because I'm pouring one out on the floor for all my homies: those countless processing cycles lost to cheap inefficiency.

There's a reason I noped out of assembly really early, and it was mostly everything you wrote. 'The' metal is like my relationship to music: I admire it from afar rather than getting involved. Once I had aspirations, but now I mainly make sure there is access to pictures of cats.

Pope Corky the IX posted:

You are a terrible person.

trap sprung.


Tarkaroshe has found a dictionary. You think he's found out that it doesn't define 'gullible'?

Percelus
Sep 9, 2012

My command, your wish is

*backers repeat credulous claims from crobblers* SHOWED YOU GOONIES

Pope Corky the IX
Dec 18, 2006

What are you looking at?

Hav posted:

trap sprung.

Spring all the traps you'd like, I'll still take advantage of every one of you. Especially now that your Lord has abandoned you.

Hav
Dec 11, 2009

Fun Shoe

Pope Corky the IX posted:

Spring all the traps you'd like, I'll still take advantage of every one of you. Especially now that your Lord has abandoned you.

tane

Ramadu
Aug 25, 2004

2015 NFL MVP


XK posted:

lol

I think I take some of this almost personally.

People like Archimedes, Euclid, Pythagoras, Euler, and countless others changed the course of human history scratching out equations by hand, many of them on wax tablets.

Now, more calculations than all of them ever did collectively, in their entire lives, are used to push around a Starfarer.

this owns lmao

current humans are the best humans lmao

Pope Corky the IX
Dec 18, 2006

What are you looking at?

Would you rather by mouth gently pass by your pubic hair, or let some corporate rear end in a top hat dictate the rest of your life?

XK
Jul 9, 2001

Star Citizen is everywhere. It is all around us. Even now, in this very room. You can see it's fidelity when you look out your window or when you watch youtube

Colostomy Bag posted:

ARM? Given its acronym, fairly prevalent as a RISC architecture. But yeah, in general RISC was supposed to be the second coming of Christ and never panned out, and like you said, most CISC instructions are decoded into basic RISC instructions.

I love ARM, for low power draw, and low chip cost. CISC just beats it when you can plug into a wall, and when larger die size isn't an issue. CISC, counterintuitively, will beat RISC, on pure speed, due to instruction efficiency. The much wider instruction set allows more efficient code usage, cutting down on code fetching and decoding, simply because you have a wider selection of operations to choose from, and can choose the very precise operation you need. Bizarrely, 20 years ago, the opposite was true, and the fixed instruction size of RISC was faster.
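The poster-child example of that "one wide op" efficiency is x86's rep movsb: a single instruction the frontend fetches and decodes once, and the microcode runs the entire memory copy behind it. A sketch, assuming x86-64 and GCC/Clang inline asm (not anyone's production code):

code:
#include <cstddef>

// One complex CISC instruction performs the whole copy. The frontend
// fetches/decodes a single op instead of re-chewing a loop body for
// every byte; the microcode and tricks behind the curtain do the rest.
void copy_cisc(void* dst, const void* src, std::size_t n) {
    asm volatile("rep movsb"
                 : "+D"(dst), "+S"(src), "+c"(n)  // RDI, RSI, RCX
                 :
                 : "memory");
}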

quote:

SPARC/MIPS had a ton of registers. But there was a caveat: "register windows" that you needed to slide around.

poo poo like this is a nightmare. Invisible microcode handling the actual silicon registers is so much better. Hardware dynasties were won and lost because something was easy or hard to program for.

tooterfish
Jul 13, 2013

Pope Corky the IX posted:

Would you rather by mouth gently pass by your pubic hair, or let some corporate rear end in a top hat dictate the rest of your life?

Did your auto-erotic asphyxiation habit kill off the language centre of your brain?

big nipples big life
May 12, 2014

Pope Corky the IX posted:

Especially now that your Lord has abandoned you.

Didn't you see the announcement? Lowtax is back and he's serious this time.

XK
Jul 9, 2001

Star Citizen is everywhere. It is all around us. Even now, in this very room. You can see it's fidelity when you look out your window or when you watch youtube

Hav posted:

There's a reason I noped out of assembly really early, and it was mostly everything you wrote. 'The' metal is like my relationship to music: I admire it from afar rather than getting involved. Once I had aspirations, but now I mainly make sure there is access to pictures of cats.

I'm currently getting out of any programming as a profession. I got into learning all that poo poo because it's amazing what power is in a modern CPU, especially harnessed at the lowest possible level. The flipside of that coin is that programming for anyone's purposes except your own is kind of miserable. You end up making a hologram of a guy teleconferencing from a desk.

I made the decision I will only program for myself.

I'm moving into IT security. Pays better, and rarely has crunch time.

Colostomy Bag
Jan 11, 2016

:lesnick: C-Bangin' it :lesnick:

XK posted:

I love ARM, for low power draw, and low chip cost. CISC just beats it when you can plug into a wall, and when larger die size isn't an issue. CISC, counterintuitively, will beat RISC, on pure speed, due to instruction efficiency. The much wider instruction set allows more efficient code usage, cutting down on code fetching and decoding, simply because you have a wider selection of operations to choose from, and can choose the very precise operation you need. Bizarrely, 20 years ago, the opposite was true, and the fixed instruction size of RISC was faster.


poo poo like this is a nightmare. Invisible microcode handling the actual silicon registers is so much better. Hardware dynasties were won and lost because something was easy or hard to program for.


Yeah, I hate to further derail this thread that always stays on topic, but there are a ton of architectures that are more efficient with what we have now. x86 is really a mess looked at from the assembly/machine-instruction point of view. From a CPU designer's perspective, dealing with opcodes that have different sizes must be hell in the decoder branch of their silicon.

While I commend AMD for introducing x64 and lighting a fire under Intel's rear end, they've somewhat crippled innovation.

LastCaress
May 8, 2004

bonobo
All I know is I can make my Arduino use less than 20 µA :|
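(For the curious, the usual recipe, as a sketch: this assumes a bare ATmega with avr-libc, and what you actually measure depends on the board's regulator and LEDs.)

code:
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/power.h>
#include <avr/sleep.h>

// Roughly how you get an AVR into the low-µA range: kill the ADC, gate
// the peripheral clocks, then power-down sleep until an interrupt wakes it.
void sleep_deeply() {
    ADCSRA &= ~_BV(ADEN);             // ADC burns current even when idle
    power_all_disable();              // stop clocks to all peripherals
    set_sleep_mode(SLEEP_MODE_PWR_DOWN);
    cli();
    sleep_enable();
    sei();                            // allow the wake-up interrupt through
    sleep_cpu();                      // single-digit µA here, board depending
    sleep_disable();                  // execution resumes after wake
}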

Toops
Nov 5, 2015

-find mood stabilizers
-also,

Pope Corky the IX posted:

You are a terrible person.

Uh oh

Polish Avenger
Feb 13, 2007
has an invalid opinion.

XK posted:

RISC was awesome back when assembly language opcodes were hardwired silicon. Long-pipeline 5 GHz CPUs were awesome before they hit physical limits on silicon speed and complexity limits on the pipelines.

RISC is out of fashion for general purpose CPUs, and Intel chucked NetBurst architecture in the bin, for a reason.

Now everything is running hard toward parallel processing, which I believe is just past infancy and into the toddler stage for CPUs. GPUs are really wonderful there (NVIDIA TITAN Xp, 3840 shader processors, imagine a 3840-core CPU) but have much simpler instruction sets, and vastly fewer issues with data interdependency, than a general purpose CPU. This is why modern graphics cards are so mind blowing, and wreck CPUs at very special purpose types of scientific calculations.

You can't even write assembly at the silicon level anymore. Modern CPUs take assembly language opcodes, like the highly complex x86_64 instruction set, and run them through an interpreter before the actual desired operation ever hits the silicon gates that calculate it. That's what the microcode on CPUs is. The more complex the op is, or the less likely it is to occur, the more likely it will be implemented purely in microcode.

The "bare metal" assembly language is broken down into even simpler OPs on the silicon via the microcode instructions. There's like ~3,500+ distinct x86_64 instructions. Try hardwiring that in silicon without both blowing up your die size and having a pathological flaw. Some of the registers can't even be touched except by the microcode. The registers cited in the object code often even aren't real, and only exist ethereally. There's kind of a RISC-like thing going on under the hood of the CPU itself, where a single asm instruction gets implemented via multiple lower level operations, and tricks behind the curtain. You can never see or touch any of that unless you have Intel/AMD microcode encryption and signing keys. The CPU itself is basically a loving a VM environment.

We essentially have a shadow RISC on modern CPUs, and threw away deep pipelines for multiprocessing. Plus, we also gained a lot of multiprocessing on each individual core, which can run handfuls of operations and calculations side by side all on its own, single threaded, with any spare resources going into hyperthreading via virtual cores.

We utilized all this amazing technological development in making super powerful CPUs, and bundling 8 or more on a postage stamp, sold for a few hundred bucks, as an excuse to make lovely inefficient code because now processing is cheap.

Now Chris Roberts is abusing all this to make 750k poly ships you can shove a vending machine into. :(

I just wrote all of this because I'm pouring one out on the floor for all my homies: those countless processing cycles lost to cheap inefficiency.

That's actually crazy. I got halfway through a NAND to Tetris course (stopped at building an assembler in the language of my choice) and it was the coolest thing. I knew modern hardware had to be more complicated by orders of magnitude than the 16-bit platform the course used, but I had no idea it was like that. At the same time, it should be no surprise, because it seems like there is no limit to the utility of adding more abstraction.

XK
Jul 9, 2001

Star Citizen is everywhere. It is all around us. Even now, in this very room. You can see it's fidelity when you look out your window or when you watch youtube

Ramadu posted:

this owns lmao

current humans are the best humans lmao

Imagine if you gave Archimedes, the greatest mathematician of all time, a programmable calculator.

quote:

At the capture of Syracuse Marcellus had been aware that his victory had been held up much and long by Archimedes’ machines. However, pleased with the man’s exceptional skill, he gave out that his life was to be spared, putting almost as much glory in saving Archimedes as in crushing Syracuse. But as Archimedes was drawing diagrams with mind and eyes fixed on the ground, a soldier who had broken into the house in quest of loot with sword drawn over his head asked him who he was. Too much absorbed in tracking down his objective, Archimedes could not give his name but said, protecting the dust with his hands, “I beg you, don’t disturb this,” and was slaughtered as neglectful of the victor’s command; with his blood he confused the lines of his art.

Dude was working out math on his hands and knees while invading soldiers rampaged through his city. Faced with imminent death, he was all, "Hold up, I'm working on some math in the dirt here. Don't touch it." Then he got wrecked with a sword.

Give that dude crobbler's resources and we'd all be universe brains right now.

Variable 5
Apr 17, 2007
We do these things not because they are easy, but because we thought they would be easy.
Grimey Drawer

Yes, you mustn't make fun of the horrible fat person for being horribly fat.

MilesK
Nov 5, 2015

It's the start of the Evocati release window today! ... again.


https://www.reddit.com/r/starcitizen/comments/6srpxi/happy_avocado_test_window_day/


Also, Burndown starts today! Here's a clip from the twitter preview.

https://media.giphy.com/media/l1J3P77lAOIOmzZBe/giphy.mp4

Tank Boy Ken
Aug 24, 2012
J4G for life
Fallen Rib

MilesK posted:

Also, Burndown starts today! Here's a clip from the twitter preview.

https://media.giphy.com/media/l1J3P77lAOIOmzZBe/giphy.mp4

This is good for Star Citizen.

Danknificent
Nov 20, 2015

Jinkies! Looks like we've got a mystery on our hands.
has anyone posted this yet

Jonny Nox
Apr 26, 2008




Has someone told the unsung story of Unsung Story?

It is why you need to get a refund.

http://www.techtimes.com/articles/212238

Edit:
From an old Kotaku article


quote:

Playdek seems to have switched focus on Unsung Story to make it more of a multiplayer game, and their new development timeline is all about player vs. player combat rather than the narrative-heavy single-player game that the Kickstarter initially promised.

Jonny Nox fucked around with this message at 19:31 on Aug 10, 2017

Tijuana Bibliophile
Dec 30, 2008

Scratchmo

Danknificent posted:

has anyone posted this yet



I approve of this post

Tijuana Bibliophile
Dec 30, 2008

Scratchmo

Jonny Nox posted:

Has someone told the unsung story of Unsung Story?

It is why you need to get a refund.

http://www.techtimes.com/articles/212238

Edit:
From an old Kotaku article

now these guys will make it



lol

Toops
Nov 5, 2015

-find mood stabilizers
-also,

Variable 5 posted:

Yes, you mustn't make fun of the horrible fat person for being horribly fat.

:confused:


XK
Jul 9, 2001

Star Citizen is everywhere. It is all around us. Even now, in this very room. You can see it's fidelity when you look out your window or when you watch youtube

Hav posted:

There's a reason I noped out of assembly really early, and it was mostly everything you wrote. 'The' metal is like my relationship to music: I admire it from afar rather than getting involved. Once I had aspirations, but now I mainly make sure there is access to pictures of cats.

Oh, hey, also, do you want to get into chipset-architecture-defined data line sizes? Arranging your data in memory to best prevent cache prefetches from stomping on it, while keeping the highest locality, determined by the "x" of your "x-way associative cache"? Separating some of it enough so the different banks read faster?
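A toy example of the layout game (made-up structs, 64-byte cache lines assumed). Same data either way, but the second layout means a scan over one hot field pulls 16 useful values per cache line instead of one, and the prefetcher sees a clean sequential stream instead of stomping through 64-byte records:

code:
#include <cstddef>

// Array-of-structs: scanning just 'health' drags the whole 64-byte
// record through the cache for every entity touched.
struct EntityAoS {
    float pos[3], vel[3];
    int   health;
    char  name[36];  // pads the record out to 64 bytes
};

// Struct-of-arrays: the hot field is contiguous and densely packed.
struct EntitiesSoA {
    float* pos;     // 3*n floats
    float* vel;     // 3*n floats
    int*   health;  // n ints, 16 per cache line
};

int count_dead(const EntitiesSoA& e, std::size_t n) {
    int dead = 0;
    for (std::size_t i = 0; i < n; ++i)
        dead += (e.health[i] <= 0);  // touches only the hot array
    return dead;
}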

I ordered a weirdly translated book written by a Russian to learn retarded obscure memory optimizations. I think I paid triple retail for Robbins' "Debugging Applications for Windows" after hunting down a used copy. Thankfully Intel offered their 6-volume, 4,000-page CPU architecture guide free for anyone who asked.

Learning real asm is a nightmare. Not example program asm, but asm spit out by a compiler, with all the optimizations. It's wild poo poo. You need to learn things like there's a flag you can set to make your processor read backwards.

I almost went to Russia to learn IDA disassembler stuff from Kaspersky. Thank God I didn't, or I could be involved in a Russia investigation right now.

Then Scientologists wanted to fly me down to Clearwater Florida to hire me for whatever AV operation they were running. They said they were going to first-class me, and pick me up in a limo. Thank God I was like, "Hmm, Clearwater, wait a minute. Oh gently caress, Scientology, NOPE."

All because 17-year-old me wanted to learn how to reverse engineer registration protection schemes because of the Quake demo disc.

gently caress programming.
