Mezzanine
Aug 23, 2009

Suspicious Dish posted:

Continuing with my theme of reverse engineering lots of random game formats...



http://magcius.github.io/zelview.js/zelview.html

I will pay you money if you can do something similar with Mario Kart 8 or Super Mario 3D World. I would just LOVE to be able to fly around and look at the stages from MK8 because some of them are gorgeous!

I know last time I was looking around, the formats for Mario Kart stages (at least Mario Kart Wii and 7) had been mapped out, so some guys were putting out the models for the karts and characters, and there was at least a way to see the 3D data for the stages. Getting it textured would be a whole different beast, I'd imagine.

Mezzanine fucked around with this message at 03:45 on Jun 22, 2015

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Mezzanine posted:

I will pay you money if you can do something similar with Mario Kart 8 or Super Mario 3D World. I would just LOVE to be able to fly around and look at the stages from MK8 because some of them are gorgeous!

I know last time I was looking around, the formats for Mario Kart stages (at least Mario Kart Wii and 7) had been mapped out, so some guys were putting out the models for the karts and characters, and there was at least a way to see the 3D data for the stages. Getting it textured would be a whole different beast, I'd imagine.

How much money

Zaphod42
Sep 13, 2012

If there's anything more important than my ego around, I want it caught and shot now.

Suspicious Dish posted:

Continuing with my theme of reverse engineering lots of random game formats...



http://magcius.github.io/zelview.js/zelview.html

drat that's slick, Dish. I love poo poo like this. I looked into the EverQuest map formats forever ago, and some guy on the internet reverse engineered the Dark Souls maps, and I had a good time looking around at the data. OoT is pretty much as good as it gets.

As a kid all I wanted was for Nintendo to like open source OOT so I could actually maybe make some of my own levels or games with that engine. Not really Nintendo's style unfortunately (Although Mario Maker is gonna be loving sweet).

The original design of the first NES Zelda was supposed to let you design levels for your friends to complete, but they abandoned the idea when they dropped the disk drive.

Snapchat A Titty posted:

This owns. I've hosed around with some image & sound formats from old games, but never anything 3D. You got any hints as to process aside from setting breakpoints and disassembling rendering code? Like, patterns that may point in some direction & what not.

Yeah, I'm also super curious about how you went about this. How did you dump the data, how did you figure out the format, and how did you import that into your renderer? And what are you even rendering with here? Custom JavaScript OpenGL? drat, that's slick.

Zaphod42 fucked around with this message at 17:58 on Jun 22, 2015

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
I was going to make a whole effortpost about it, but it's sort of hard and complex and requires understanding how the N64 (and the Gamecube GPU, for bmdview.js) works. The graphics data is in a thing called a "display list", which is a series of commands to the RSP and RDP, the graphics coprocessors on the N64. But it's not that simple, because those run some sort of firmware called "microcode". There are lots of microcodes for the N64 (some games use up to six!), but thankfully OoT only uses two: for some simple menus and other things, it uses the S2DEX microcode (which is for 2D sprites), and for drawing the game world, it uses the F3DEX2 microcode.

Googling some of those terms will get you pretty far. Nintendo formats tend to use the GPU almost directly, and emulators have been written for basically every system out there, so that's a big help.

So yeah, the majority of the work was working on an emulator to render F3DEX2 display list commands. And that's over here: https://github.com/magcius/zelview.js/blob/gh-pages/f3dex2.js
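
The core loop is simpler than it sounds, though: read 8-byte commands, switch on the opcode in the top byte, and stop at the end-of-list command. A bare-bones sketch of that idea (the opcode constants and handler names here are illustrative, not lifted from f3dex2.js):

code:
// Bare-bones sketch of walking an F3DEX2-style display list.
// "rom" is assumed to be a DataView over the ROM/RAM; the opcode constants
// and the handler table are illustrative placeholders.
var G_TRI1  = 0x05;   // draw one triangle (value shown for flavor)
var G_ENDDL = 0xDF;   // end of display list (value shown for flavor)

var commandTable = {};
commandTable[G_TRI1] = function (state, w0, w1) {
    // decode three vertex indices out of w0 and emit a triangle
};

function runDisplayList(rom, addr, state) {
    while (true) {
        var w0 = rom.getUint32(addr), w1 = rom.getUint32(addr + 4);
        var cmd = w0 >>> 24;            // opcode lives in the top byte
        if (cmd === G_ENDDL)
            break;
        var handler = commandTable[cmd];
        if (handler)
            handler(state, w0, w1);     // set a texture, push vertices, draw...
        addr += 8;                      // every command is two 32-bit words
    }
}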

I don't tend to like using libraries like three.js, they mostly get in my way for extremely special cases like this. So I just write it all by hand. My code isn't the cleanest, but between that and bmdview.js, which implements a similar viewer for Gamecube models, I think I have a good way to clean it up, combine the two and make a generic model viewer system. I might hack on that this weekend if I have time.

Collision geometry (the wireframe) and waterboxes (the blue around water) are stored separately, and my drawing just attempts to visualize them. That's just custom drawing.

As for dumping data from the ROM, I used a debug ROM which had a filesystem hidden inside it.

After that, I invented my own simple packfile format to store the different files in since I didn't want to shove a 64MB ROM on the internet and have people download it. You can see my absolute trash code to extract data from the ROM into its own custom format here: https://github.com/magcius/zelview.js/blob/gh-pages/rom.js
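
If you want to roll your own version of that, the packfile idea really is just a tiny table of contents followed by raw bytes. Something like this, purely as an illustration (this is not the actual layout rom.js writes):

code:
// Purely illustrative packfile reader: a u32 file count, then per file a u32
// offset and u32 size, then the raw data. Not the real zelview.js format.
function readPackfile(buffer) {
    var view = new DataView(buffer);
    var count = view.getUint32(0);
    var files = [];
    for (var i = 0; i < count; i++) {
        var offset = view.getUint32(4 + i * 8);
        var size   = view.getUint32(4 + i * 8 + 4);
        files.push(new Uint8Array(buffer, offset, size));
    }
    return files;
}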

Zaphod42
Sep 13, 2012

If there's anything more important than my ego around, I want it caught and shot now.
Okay, so the data itself is stored as instructions to be loaded in F3DEX2 format? So then it's just a matter of finding all the right data for a given level and then being able to render from that format. I'm with you, I think.

So did you just like play through the game on emulator until you get to the level in question, and then dump the current loaded instructions for the graphics data, or like set a breakpoint and find where it starts reading the level data on transition, or what? How did you break out the different levels?

I guess the texture image offset is what tells you which texture to use for a given face? Is there a base offset value for each level, or does each face tell you the texture from within a giant array of textures in the entire game?

Your renderer only appears to load the necessary textures, but you could be doing that after you process the levels and separate everything out. Although knowing the N64 probably only loaded a given set of textures at a time I'm curious how that was arranged. Did the engine just look through all the vertex data of a level and only load the textures referenced, or was there a set of level textures and it'd load a set on transition?

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
As mentioned, I wrote my own extractor after I found all the files in the debug ROM through its filesystem table.

Textures are a bit complicated. There are eight texture slots (known as "tiles", but I'm extremely inconsistent and wrong with this terminology in my code -- but I call the slot number "tileIdx" consistently). In most cases, the game only uses slot 0 (almost), so I just ignore any attempts to load any other texture, since that looks wrong.

It's a bit complicated, because the RDP coprocessor has separate "texture memory", TMEM, but I just pull a hack and don't emulate that properly.

Anyway, the command stream to upload a texture looks like this:

code:
TEXTURE 0 // Use texture slot 0.
SETTIMG // Set the texture address to upload from.
SETTILE 7 // Set the parameters for texture slot 7, which is where we load new textures into. This includes the address into TMEM it should be stored.
LOADBLOCK 7 // Upload the texture.
SETTILE //  Set the parameters for texture slot 0, which includes the address in TMEM where it should be. So, we "reuse" the address from slot 7.
SETTILESIZE // Set the size of texture slot 0.
Instead of emulating texture slots, TMEM, etc., I just recognize that stream of commands and then set things to use that. If I wanted to properly emulate the N64, I'd have to be a lot more careful about things.

See https://github.com/magcius/zelview.js/blob/gh-pages/f3dex2.js#L809-L818
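
The recognition itself is basically a tiny state machine: stash the address from SETTIMG and the parameters from SETTILE, and once the slot-0 SETTILESIZE arrives you have everything you need to describe a texture. Roughly like this (simplified; the helper and field names are mine, not the ones in f3dex2.js):

code:
// Simplified sketch of recognizing the SETTIMG/SETTILE/LOADBLOCK/SETTILESIZE
// pattern instead of emulating TMEM. Bit layouts shown for flavor only.
var pendingTexture = {};

function cmd_SETTIMG(state, w0, w1) {
    pendingTexture.addr = w1;                       // segmented source address
}
function cmd_SETTILE(state, w0, w1) {
    pendingTexture.format = (w0 >>> 21) & 0x07;     // texel format
    pendingTexture.size   = (w0 >>> 19) & 0x03;     // texel bit depth
}
function cmd_SETTILESIZE(state, w0, w1) {
    pendingTexture.width  = (((w1 >>> 12) & 0xFFF) >> 2) + 1;  // 10.2 fixed point
    pendingTexture.height = (((w1 >>>  0) & 0xFFF) >> 2) + 1;
    state.currentTexture = pendingTexture;          // slot 0 is now described
    pendingTexture = {};
}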

The texture address itself includes a "bank". The address space on the N64 isn't virtual, it's segmented. There are 32 possible banks, I believe, and the game can choose which bank points where in ROM / RAM / whatever. The way that the Zelda engine works, it always makes bank 0x02 point into the scene file, and 0x03 point into the individual room files. The banks are switched out when different scenes and rooms are loaded. For reference, "scenes" are what you choose from the combobox, and "rooms" are sort of what they look like -- the individual rooms of a dungeon. I don't have a way of only rendering some rooms -- my viewer just loops through and renders them all.

Using that, I can look up an offset into the scene / room files I loaded, and the texture data is there.
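
Resolving a segmented address is just a table lookup: the top byte picks the bank, the low 24 bits are an offset into whatever that bank currently points at. A sketch (names are mine, not the repo's):

code:
// Sketch of resolving an N64 segmented address against a bank table.
// banks[0x02] would hold the current scene file, banks[0x03] the current room.
function resolveAddress(banks, segAddr) {
    var bank = segAddr >>> 24;           // which bank/segment
    var offset = segAddr & 0x00FFFFFF;   // offset within that bank
    if (banks[bank] === undefined)
        throw new Error("unmapped bank 0x" + bank.toString(16));
    return { buffer: banks[bank], offset: offset };
}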

This is a bit more complicated because of paletted textures / TLUTs. Textures can be paired with a palette, which is also loaded into TMEM through the LOADTLUT command. But it doesn't have to be "load TLUT, load texture" -- all that matters is what TLUT is loaded at the time you draw your triangles, and the game will commonly do those things in reverse order. So I simply wait until a TRI command is performed before fully decoding the texture.

See https://github.com/magcius/zelview.js/blob/gh-pages/f3dex2.js#L135-L152

The TRI1/TRI2 commands are the things that actually perform the drawing of individual triangles, so this is where sampling is done.
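
In sketch form, the deferral looks like this (illustrative only; decodeTexture and emitTriangle are stand-in helpers, not real functions from the repo):

code:
// Illustrative deferred decode: nothing gets converted until a triangle is
// actually drawn, so it doesn't matter whether LOADTLUT came before or after
// the texture upload.
function decodeTexture(tex, tlutAddr) { /* convert CI texels using the palette */ }
function emitTriangle(state, w0, w1)  { /* push three vertex indices */ }

function cmd_LOADTLUT(state, w0, w1) {
    state.currentPalette = w1;           // just remember the palette's address
}
function cmd_TRI1(state, w0, w1) {
    if (state.currentTexture && !state.currentTexture.decoded) {
        decodeTexture(state.currentTexture, state.currentPalette);
        state.currentTexture.decoded = true;
    }
    emitTriangle(state, w0, w1);
}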

RoboCicero
Oct 22, 2009

"I'm sick and tired of reading these posts!"

clockwork automaton posted:

Oh wow this is really cool.

I've been just trying to get a basic UI on what might be the worst most boring game ever.


No way, this looks great!

Mezzanine
Aug 23, 2009

Suspicious Dish posted:

How much money

Honestly, I'll throw you a few bucks on... the programmer equivalent of Patreon?

Zaphod42
Sep 13, 2012

If there's anything more important than my ego around, I want it caught and shot now.

Suspicious Dish posted:

As mentioned, I wrote my own extractor after I found all the files in the debug ROM through its filesystem table.

Textures are a bit complicated. There are eight texture slots (known as "tiles", but I'm extremely inconsistent and wrong with this terminology in my code -- but I call the slot number "tileIdx" consistently). In most cases, the game only uses slot 0 (almost), so I just ignore any attempts to load any other texture, since that looks wrong.

It's a bit complicated, because the RDP coprocessor has separate "texture memory", TMEM, but I just pull a hack and don't emulate that properly.

Anyway, the command stream to upload a texture looks like this:

code:
TEXTURE 0 // Use texture slot 0.
SETTIMG // Set the texture address to upload from.
SETTILE 7 // Set the parameters for texture slot 7, which is where we load new textures into. This includes the address into TMEM it should be stored.
LOADBLOCK 7 // Upload the texture.
SETTILE //  Set the parameters for texture slot 0, which includes the address in TMEM where it should be. So, we "reuse" the address from slot 7.
SETTILESIZE // Set the size of texture slot 0.
Instead of emulating texture slots, TMEM, etc., I just recognize that stream of commands and then set things to use that. If I wanted to properly emulate the N64, I'd have to be a lot more careful about things.

See https://github.com/magcius/zelview.js/blob/gh-pages/f3dex2.js#L809-L818

The texture address itself includes a "bank". The address space on the N64 isn't virtual, it's segmented. There are 32 possible banks, I believe, and the game can choose which bank points where in ROM / RAM / whatever. The way that the Zelda engine works, it always makes bank 0x02 point into the scene file, and 0x03 point into the individual room files. The banks are switched out when different scenes and rooms are loaded. For reference, "scenes" are what you choose from the combobox, and "rooms" are sort of what they look like -- the individual rooms of a dungeon. I don't have a way of only rendering some rooms -- my viewer just loops through and renders them all.

Using that, I can look up an offset into the scene / room files I loaded, and the texture data is there.

This is a bit more complicated because of paletted textures / TLUTs. Textures can be paired with a palette, which is also loaded into TMEM through the LOADTLUT command. But it doesn't have to be "load TLUT, load texture" -- all that matters is what TLUT is loaded at the time you draw your triangles, and the game will commonly do those things in reverse order. So I simply wait until a TRI command is performed before fully decoding the texture.

See https://github.com/magcius/zelview.js/blob/gh-pages/f3dex2.js#L135-L152

The TRI1/TRI2 commands are the things that actually perform the drawing of individual triangles, so this is where sampling is done.

Thanks for all of this, brilliant stuff. That all makes sense. You're making me want to poke around with N64 emulation in some way, pretty drat cool. I messed with writing NES and Game Boy games but I haven't toyed around with emulators yet (other than playing on them).

clockwork automaton
May 2, 2007

You've probably never heard of them.

Fun Shoe

clockwork automaton posted:

I've been just trying to get a basic UI on what might be the worst most boring game ever.




More work on this - the liquid in the mug is actually changing colors over time:


Pi Mu Rho
Apr 25, 2007

College Slice
I made a simulator for an industrial robot arm.

It was a bugger to do, as it uses a controller that interfaces with a PLC, neither of which I had access to (except the PLC remotely on a few occasions, where latency made doing anything meaningful impossible anyway).

But it works. I got Unity and the PLC talking to each other and put together a usable demo for an expo:







The original plan was to go with a VR setup (which I implemented) but it was decided that was a bit impractical for having on the floor at an expo. So instead, two big screens were bolted to the front and side of the cab, which is why the screenshots are split into a front and side view.

Very brief video of it in action:

https://www.youtube.com/watch?v=mvfCqs_emYw

I'm rather proud of it; it turned out to be very popular at the expo.

Voronoi Potato
Apr 4, 2010

Pi Mu Rho posted:

I made a simulator for an industrial robot arm.

This looks fantastic! I'd love to see something like this at an arcade.

Pi Mu Rho
Apr 25, 2007

College Slice

Voronoi Potato posted:

This looks fantastic! I'd love to see something like this at an arcade.

If I'd had more time to work on it, I would have gamified it a bit more. I agree, it could work really well in an arcade. Maybe I should suggest they make one :)

sarehu
Apr 20, 2007

(call/cc call/cc)
A virtual claw machine! Sounds like a great way to prevent babies from getting stuck inside.

Dred_furst
Nov 19, 2007

"Hey look, I'm flying a giant dong"
After reading suggestions in this thread, I finally got around to having a go at writing a ray tracer, and it's surprisingly fun:


so far I've gotten diffuse and reflectance working, as well as gamma correction and exposure control. Next on the list is anti-aliasing of some sort (suggestions would be welcome) and some more object types.

Xerophyte
Mar 17, 2008

This space intentionally left blank

Dred_furst posted:

so far I've gotten diffuse and reflectance working, as well as gamma correction and exposure control. Next on the list is anti-aliasing of some sort (suggestions would be welcome) and some more object types.

Serious anti-aliasing stuff tends to touch heavily on filtering techniques and image reconstruction. You can formulate the problem as finding the underlying "infinite resolution" image from some limited set of point samples, and then filtering that 2d function down to whatever number of pixels you actually want. You may or may not be comfortable going very far down that rabbit hole, it can get pretty much arbitrarily complex. Still it helps as a framework: your job is computing pixels (which cover an area) from point samples (which don't).

With that in mind, start with doing something extremely simple that works and makes things look better. Stop when satisfied or bored. In some sort of order of increasing headache...

1. Take your first sample in the center of each pixel. It's always a good place to start. Then:
1a. Take 4x4 or so evenly spaced samples in a regular grid for each pixel.
1b. Uniformly sample random points in the pixel. Feel free to use rand(), you don't need quality in that sense.
1c. Uniform sampling is bad as a rule though so use some low discrepancy sequence like Halton and take the first N.
1d. Nvidia have patents on that, somehow, so look into general techniques for generating blue noise if you want to not infringe for whatever reason. I thought this was a nice paper on the subject from earlier this year if you want to be bleeding edge and all (you probably don't).

2. You now have N samples. You'll want to weigh them together to make a nice pixel. You can:
2a. Eh, just weigh the samples evenly. Sum all contributions and divide by N.
2b. You can get better anti-aliasing by favoring samples near the center of the pixel and allowing samples to influence all neighbouring pixels. Weight samples bilinearly with respect to pixel centers.
2c. Point (a) and box (b) kernel filters are all well and good, but don't you really want a nice, arbitrary width Gaussian kernel?
2d. No, of course not! You want an adaptive wavelet kernel that also influences your sample point choices, and make for a snappy (for academia, anyhow) YouTube video.

Doing complex things generally makes it faster, not better. Doing something simple with a shitload of samples always works well enough.
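
If it helps to see 1c and 2b glued together, here's a rough sketch (simplified to weighting within a single pixel instead of splatting into the neighbours like 2b really wants; tracePixel is whatever your tracer's per-sample entry point is):

code:
// Rough sketch of 1c + 2b: Halton sample positions inside a pixel, weighted
// with a tent filter toward the pixel center. tracePixel(x, y) is assumed to
// return an {r, g, b} radiance sample for that image-plane position.
function halton(index, base) {
    var f = 1, r = 0;
    while (index > 0) {
        f /= base;
        r += f * (index % base);
        index = Math.floor(index / base);
    }
    return r;
}

function renderPixel(px, py, numSamples, tracePixel) {
    var sum = { r: 0, g: 0, b: 0 }, weightSum = 0;
    for (var i = 1; i <= numSamples; i++) {
        var dx = halton(i, 2), dy = halton(i, 3);                    // in [0, 1)
        var w = (1 - Math.abs(dx - 0.5)) * (1 - Math.abs(dy - 0.5)); // tent-ish weight
        var c = tracePixel(px + dx, py + dy);
        sum.r += w * c.r; sum.g += w * c.g; sum.b += w * c.b;
        weightSum += w;
    }
    return { r: sum.r / weightSum, g: sum.g / weightSum, b: sum.b / weightSum };
}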

Baloogan
Dec 5, 2004
Fun Shoe

Dred_furst
Nov 19, 2007

"Hey look, I'm flying a giant dong"

Xerophyte posted:

Serious anti-aliasing stuff tends to touch heavily on filtering techniques and image reconstruction. You can formulate the problem as finding the underlying "infinite resolution" image from some limited set of point samples, and then filtering that 2d function down to whatever number of pixels you actually want. You may or may not be comfortable going very far down that rabbit hole, it can get pretty much arbitrarily complex. Still it helps as a framework: your job is computing pixels (which cover an area) from point samples (which don't).

With that in mind, start with doing something extremely simple that works and makes things look better. Stop when satisfied or bored. In some sort of order of increasing headache...

1. Take your first sample in the center of each pixel. It's always a good place to start. Then:
1a. Take 4x4 or so evenly spaced samples in a regular grid for each pixel.
1b. Uniformly sample random points in the pixel. Feel free to use rand(), you don't need quality in that sense.
1c. Uniform sampling is bad as a rule though so use some low discrepancy sequence like Halton and take the first N.
1d. Nvidia have patents on that, somehow, so look into general techniques for generating blue noise if you want to not infringe for whatever reason. I thought this was a nice paper on the subject from earlier this year if you want to be bleeding edge and all (you probably don't).

2. You now have N samples. You'll want to weigh them together to make a nice pixel. You can:
2a. Eh, just weigh the samples evenly. Sum all contributions and divide by N.
2b. You can get better anti-aliasing by favoring samples near the center of the pixel and allowing samples to influence all neighbouring pixels. Weight samples bilinearly with respect to pixel centers.
2c. Point (a) and box (b) kernel filters are all well and good, but don't you really want a nice, arbitrary width Gaussian kernel?
2d. No, of course not! You want an adaptive wavelet kernel that also influences your sample point choices, and make for a snappy (for academia, anyhow) YouTube video.

Doing complex things generally makes it faster, not better. Doing something simple with a shitload of samples always works well enough.

So I did some of 1c and 2b. This took ~2 minutes to render instead of the previous 10. (32x sampling; I couldn't see the difference between this and 8.) I hope I am doing this right :S

hooray less jaggies!

Could I apply the same oversampling techniques with the lights to produce softer shadows?

Red Mike
Jul 11, 2011
Cool, more goons into raytracing stuff.

You may want to look into why you're getting banding on the grey wall there. For soft shadows, what you should do is trace multiple rays for every point, each one slightly offset from the center (much like you'd do in a good supersampling algo).

I also recommend the layout I used for testing after adding refraction. Mostly clear sphere (or lens if you add more complex shapes), through which you can look at a grid of smaller spheres. You get accurate lensing, which looks very cool in screenshots.
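
For the soft shadows, the multiple-ray idea boils down to something like this (hedged sketch; traceShadowRay is whatever occlusion test your tracer already has):

code:
// Sketch of soft shadows by jittering shadow rays across an area light and
// averaging visibility. traceShadowRay(origin, target) is assumed to return
// true if anything blocks the segment between the two points.
function softShadowFactor(point, lightPos, lightRadius, numRays, traceShadowRay) {
    var unblocked = 0;
    for (var i = 0; i < numRays; i++) {
        // Crude: pick a random point in a cube around the light's center.
        var target = {
            x: lightPos.x + (Math.random() * 2 - 1) * lightRadius,
            y: lightPos.y + (Math.random() * 2 - 1) * lightRadius,
            z: lightPos.z + (Math.random() * 2 - 1) * lightRadius
        };
        if (!traceShadowRay(point, target))
            unblocked++;
    }
    return unblocked / numRays;   // 0 = fully shadowed, 1 = fully lit
}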

Dred_furst
Nov 19, 2007

"Hey look, I'm flying a giant dong"

Red Mike posted:

Cool, more goons into raytracing stuff.

You may want to look into why you're getting banding on the grey wall there. For soft shadows, what you should do is trace multiple rays for every point, each one slightly offset from the center (much like you'd do in a good supersampling algo).

I also recommend the layout I used for testing after adding refraction. Mostly clear sphere (or lens if you add more complex shapes), through which you can look at a grid of smaller spheres. You get accurate lensing, which looks very cool in screenshots.

it's likely to be a floating point calculation bug when the colour is re-calculated to look better. It could also be the gamma correction. I'll look into it tomorrow.
Since earlier, I've multithreaded the rendering code, so that it renders horizontal slices of the image off-thread and then re-stitches them together. Upside: it renders significantly faster, as I can now use all 8 cores of this computer. So have a slightly higher resolution image:


I also mentioned the re-exposure thing. I split out the adjusting of the colours into the 0-1 range so that I could re-evaluate it quickly and adjust the renders to look better. A screenshot where I capture the entire range in a horribly compressed form:


And the app that invokes / previews the result:


It's worth noting that it's not very fast because it is completely written in C#.

edit: I'm using 32bit floating point numbers right now. It might be an idea to switch to doubles...
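
For anyone curious, the re-exposure trick is basically: keep the raw HDR floats around and only run an exposure + gamma mapping when writing out display pixels, so the exposure can be tweaked without re-rendering. A sketch of that idea (in JS rather than my C#, and the curve here is just one reasonable choice):

code:
// Sketch of deferred exposure: map raw HDR radiance into [0, 1] with an
// exposure curve plus gamma only at display time.
function exposeChannel(value, exposure, gamma) {
    var scaled = 1 - Math.exp(-value * exposure);      // simple exposure curve
    return Math.pow(Math.min(Math.max(scaled, 0), 1), 1 / gamma);
}

function exposeImage(hdrPixels, exposure, gamma) {
    // hdrPixels: flat array of floats (r, g, b, r, g, b, ...); returns 0-255 bytes.
    var out = new Uint8Array(hdrPixels.length);
    for (var i = 0; i < hdrPixels.length; i++)
        out[i] = Math.round(exposeChannel(hdrPixels[i], exposure, gamma) * 255);
    return out;
}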

Xerophyte
Mar 17, 2008

This space intentionally left blank

Dred_furst posted:

edit: I'm using 32bit floating point numbers right now. It might be an idea to switch to doubles...

Just about the only place where doubles will ever make sense to me is storing object translations for infrastructure renderings. When the position of a doorframe is described in inches from the state capital of Idaho, the precision actually helps. Beyond that (and a few other niche uses), gently caress 'em; you need bandwidth a lot more than you need precision. We definitely use halfs a lot more than we do doubles. Colors and HDR textures, specifically, should definitely not need more precision. They're one of the common places for halfs if you can be arsed to scale down to under 65.5k.

On soft shadows, you can do the same thing and have some fixed sample count but it'll have pretty bad discretisation issues as you shift from 4 occluded samples to 3 and the like. It'll work and you might prefer the results, but it won't be all that soft. You can still make that approach work by various tricks; rendering out the occlusion to a separate buffer and applying a blur before using it, for instance. Or by just taking a whole lot of samples. It'll be a bit of a pain, either way.

One option is to go path tracer with the shadows: for each primary ray, sample one pseudorandom point on the light for shadows and continue accumulating different primary rays with different shadow points until it looks good enough. If you want to stay with a Whitted-style tracer you're probably better off looking at typical GPU techniques, though. You can generate shadow maps (or variance shadow maps, or exponential shadow maps) with a ray tracer as well, after all, and you can afford to use pretty big PCF filters when evaluating them offline, as it were.
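
The progressive version of that is pleasantly small: one pseudorandom light sample per pass, keep summing passes into a buffer, divide by the pass count when you display. A sketch (shadePass stands in for whatever renders one full image with a single shadow sample per pixel):

code:
// Sketch of progressive shadow sampling: accumulate one-sample-per-pixel
// passes until the noise settles down. shadePass() is an assumed callback
// returning a Float32Array of width * height * 3 radiance values.
function accumulatePasses(width, height, numPasses, shadePass) {
    var accum = new Float32Array(width * height * 3);
    for (var pass = 0; pass < numPasses; pass++) {
        var passImage = shadePass();            // picks fresh random light points
        for (var i = 0; i < accum.length; i++)
            accum[i] += passImage[i];
    }
    for (var j = 0; j < accum.length; j++)
        accum[j] /= numPasses;                  // average of all passes
    return accum;
}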

TheresaJayne
Jul 1, 2011

Dred_furst posted:

it's likely to be a floating point calculation bug when the colour is re-calculated to look better. It could also be the gamma correction. I'll look into it tomorrow.
Since earlier, I've multithreaded the rendering code, so that it renders horizontal slices of the image off-thread and then re-stitches them together. Upside: it renders significantly faster, as I can now use all 8 cores of this computer. So have a slightly higher resolution image:


I also mentioned the re-exposure thing. I split out the adjusting of the colours into the 0-1 range so that I could re-evaluate it quickly and adjust the renders to look better. A screenshot where I capture the entire range in a horribly compressed form:


And the app that invokes / previews the result:


It's worth noting that it's not very fast because it is completely written in C#.

edit: I'm using 32bit floating point numbers right now. It might be an idea to switch to doubles...

Have you looked into OpenSubdiv? They have some wickedly fast CUDA code for high-res rendering in real time.

steckles
Jan 14, 2006

Dred_furst posted:

it's likely to be a floating point calculation bug when the colour is re-calculated to look better. It could also be the gamma correction. I'll look into it tomorrow.
Looks like quantization error in your display code. You're probably multiplying your computed colours by 255 and then sending them to your display surface. Look into dithering, it'll help solve this issue.

Anyway, it's good to see another goon get into raytracing; it's a deep and fascinating field. I recommend getting a copy of Physically Based Rendering: From Theory To Implementation if you want to go further. It covers just about everything you'd want to know about writing a sophisticated raytracer.
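
The dithering fix is tiny, for what it's worth: add up to one LSB of noise before you quantize, so the bands break up into grain. A sketch (assumes the channel is already gamma-corrected and in [0, 1]):

code:
// Sketch of dithered quantization: a little sub-LSB noise before rounding
// hides the banding that plain "multiply by 255 and truncate" produces.
function quantizeDithered(channel) {
    var noisy = channel * 255 + (Math.random() - 0.5);   // +/- half an LSB
    return Math.min(255, Math.max(0, Math.round(noisy)));
}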

movax
Aug 30, 2008

steckles posted:

Anyway, it's good to see another goon get into raytracing; it's a deep and fascinating field. I recommend getting a copy of Physically Based Rendering: From Theory To Implementation if you want to go further. It covers just about everything you'd want to know about writing a sophisticated raytracer.

I bought this ages ago and never cracked it open -- wish I had the time to dive into it. Pretty sure that's goon written as well.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
That book is great. And gently caress you all for reminding me that I don't have the time to dick around with my pet raytracer either :mad:

hendersa
Sep 17, 2006

I have been spending so much time working that I haven't had too much time to work on my BeagleBone Black side projects! I set aside a day and went through my todo list. One of the pesky things that I do that most people don't realize is that I test my stuff with a wide variety of hardware (which takes a lot of time). I spent five or six hours building static kernel device trees to support twelve different hardware configurations of the BBB: five LCD capes and the built-in HDMI all with and without the audio cape. There is a conflict between every LCD cape's GPIO buttons and a pin used by the audio cape, so I hacked on their device tree entries to isolate the conflicting pinmux settings and generate my own custom .dtbs:

https://github.com/hendersa/bes-dtb-rebuilder

With that all sorted out, I got to work:



Video:

https://www.youtube.com/watch?v=HL1pg13f5Z0

I also fixed a few bugs, reworked the GUI to be more usable on the smaller LCDs, removed a lot of warnings and dead code, switched over to using the <stdint.h> data types in many places, and added support in the GUI for jumping whole screens of games at a time. I'm slowly moving towards the next generation of BeagleSNES: the Beagle Entertainment System (BES).

https://github.com/hendersa/bes

People still send me nasty mails telling me that I'm lazy because I don't support the Nintendo 64, Sega Genesis, Waffle Iron 128 or whatever the hell else people want to play with. :eng99: I point these folks to open source emulators and tell them "hey, help yourself!" and I get these crazy, foaming-at-the-mouth mails back.

These types of mails usually make my day. :toot:

hendersa fucked around with this message at 21:17 on Jul 1, 2015

gbut
Mar 28, 2008

😤I put the UN🇺🇳 in 🎊FUN🎉


So, there are people familiar with and using BB, but who have no regard for the effort you already put into creating a really cool piece of software? That they use? I would have a really hard time not responding to those entitled fucks with the meatspin link or something...

hendersa
Sep 17, 2006

gbut posted:

So, there are people familiar with and using BB, but that have no regard for the effort you already put into creating a really cool piece of software? That they use? I would have really hard time not responding to those entitled fucks with the meatspin link or something...

Well, that may not be the case. I package my stuff so that it is completely stand-alone. You write a complete image to a microSD card, pop it into the BBB, and it just works. Plug in an LCD cape or hook it up to any TV and it fires up and does its thing. There isn't a Debian package to apt-get my stuff into an already-running system, and there is no building the user apps or kernel from source. For someone who is brand new to the platform, they can get up and running in short order with very little knowledge. When they want to do something else, they pop out the microSD card and the system is no longer a game console. So, I doubt these people are BeagleBone pros, or even active software developers.

Emulation also seems to attract a specific type of user that can be a bit difficult to deal with sometimes.

The idea of a recompile or, even worse, cloning a git repo and building it from scratch with all of the necessary dependencies is a bit overwhelming. Without an appreciation of what is involved, I can see how they would consider it to be simple for me and only difficult for them because they don't happen to know the steps required. I think a lot of these folks also assume I'm just taking pre-made emulators and adding the code for them verbatim into my system without doing any real work. I just :regd11: a little bit and then BAM... Mario is on the family TV.

Overall, I just choose to be optimistic and believe that the people using my stuff and enjoying it aren't contacting me because they are happy with it. That's why I like sharing this stuff with you folks. You're all savvy enough to appreciate the details and I can bounce ideas off of you, while the people that contact me are just angry about what features aren't there. :unsmith:

hendersa fucked around with this message at 00:02 on Jul 2, 2015

SystemLogoff
Feb 19, 2011

End Session?

You're totally right, you more than likely have a happy quiet fanbase that likes it just working.

There are always loud ones.

Doh004
Apr 22, 2007

Mmmmm Donuts...

hendersa posted:

People still send me nasty mails telling me that I'm lazy because I don't support the Nintendo 64, Sega Genesis, Waffle Iron 128 or whatever the hell else people want to play with. :eng99: I point these folks to open source emulators and tell them "hey, help yourself!" and I get these crazy, foaming-at-the-mouth mails back.

These types of mails usually make my day. :toot:

Please post some prime examples!

hendersa
Sep 17, 2006

Doh004 posted:

Please post some prime examples!

One period is never enough posted:

i tld you the dgen source is availeble and it obviously works on arm because retropi uses it.....why dont you want to include it......you only have nes platforms and you are missing the games for genesis it is easy to add...nes games arent very good....

Playstation expert posted:

Look, you obvisouly think Im lying so look at the proof:
http://lifehacker.com/get-a-ps1-emulator-up-and-running-on-a-raspberry-pi-2-1689869367
If the pi is so much slower than the black why cant you add this emu to bsnes? Its what your users want and it is easy to add a controller with more buttons.

RetroPie fan posted:

I have no idea why you won't listen and just copy retropie for the bbb. It already works and supports a lot of platforms BeagleSNES does not. If you want to reinvent the wheel and waste all your time go ahead I guess, but that is being stupid on your part.

... and so forth. Open source software development is very rewarding! :v:

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

hendersa posted:

... and so forth. Open source software development is very rewarding! :v:

Why won't you listen!?!!?

hendersa
Sep 17, 2006

taqueso posted:

Why won't you listen!?!!?

RetroPie is very good for what it does, but it is also the bane of my existence because a lot of people compare my work to that project and then call me an idiot because it is clearly very easy to include everything and the kitchen sink because that is how RetroPie does it.

There are hundreds and hundreds of posts on forums all over where people are comparing notes on emulation settings to get the various RetroPie emulators running the best on the RPi, trying to figure out how to get their Raspbian distro to include and load the right kernel modules for their controllers, fighting with the ALSA mixer to get the volume correct, and all sorts of other problems that are very Linux-y in nature. My goal is to eliminate that effort and give them an appliance that "just works" when they plug it in. It doesn't have to do everything, but the few things it does it should always do really, really well and in a uniform way. People just aren't into that quality-over-quantity philosophy, and they get all worked up when I inform them that including tons of stuff just because I could add it into the codebase isn't what my project is all about. It may make it in there someday, but I'm too busy fixing bugs, adding features to existing stuff, and hacking on the kernel.

For the really aggressive critics, I point out that my source code is available on github and that I'd be happy to accept new features in the form of pull-request patches with good documentation. That is when the mails from those people usually stop coming.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
I'm shocked, shocked that people who download a system for playing pirated old games for free are self-entitled dickbags.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
and hey, since everybody's doing it, might as well throw in my lovely raytracer demo I wrote from scratch tonight. who likes spheres?



http://magcius.github.io/rtx.js/rtx.html

I'm quite sure the shadowing is 100% wrong, but it looks sort of OK, so???

might freeze your browser for ten seconds while it actually does the thing

I don't know why or how I picked up the habit of terrible short variable names, but everybody in the graphics space does it. So yes, my code is hot garbage

https://github.com/magcius/rtx.js/blob/gh-pages/rtx.js

Dred_furst
Nov 19, 2007

"Hey look, I'm flying a giant dong"

steckles posted:

Looks like quantization error in your display code. You're probably multiplying your computed colours by 255 and then sending them to your display surface. Look into dithering, it'll help solve this issue.

Anyway, it's good to see another goon get into raytracing; it's a deep and fascinating field. I recommend getting a copy of Physically Based Rendering: From Theory To Implementation if you want to go further. It covers just about everything you'd want to know about writing a sophisticated raytracer.

Thanks for recommending this. I'll be spending the next week reading it; it seems really comprehensive so far (and weighs a lot for a book).

piratepilates
Mar 28, 2004

So I will learn to live with it. Because I can live with it. I can live with it.



Suspicious Dish posted:


I don't know why or how I picked up the habit of terrible short variable names, but everybody in the graphics space does it. So yes, my code is hot garbage

https://github.com/magcius/rtx.js/blob/gh-pages/rtx.js

I think it starts with people using short names to represent mathematical notation and it just grows from there.

I looked through the source of some of the scenes on shadertoy.com a while ago and it was an extraordinary pain trying to reverse engineer what someone had done from one-letter variable names closely bunched together. But that's probably the demo scene for ya.

Mellow_
Sep 13, 2010

:frog:

Baloogan posted:

Space Empire!

http://spaceempire.baloogancampaign.com/






My stupid moon:


EOFS + Aurora = timesink.

Using this project to teach myself a bunch of web technologies:
- jQuery
- AngularJS
- LeafletJS
- SignalR
- ASP.NET MVC 5
- Entity Framework
- Leaflet.Hexagon

Using SignalR and Angular, I aim to make a 'Single Page Application' that updates when other players do things in game.

Wait wait wait, correct me if I'm wrong, but you're using Aurora for the universe generation stuff etc. etc...

If so, that's awesome! I absolutely adore Aurora.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Messing around with procedural world generation: Trying to make something that can generate worlds similar to RTS campaign maps, with clearly-defined navigable area, lots of variation in space openness and chokes. It's a bit unusual in that the basic structure of the playable space (the sand) is generated first and constrains the rest of the generation process.



Next up: Figuring out how to do the spatial analysis necessary to place objectives/NPCs in the map, and fix up some pathological cases like tiny strips of unnavigable terrain.

I also need to find a way to make the sand dune filter suck less, but I think doing that is going to require a nasty hack.

edit: Looks like Unity's real-time GI still requires baking all of the transport in the editor and can't do it at runtime.

That's OK. I'll just turn it off and write my own runtime GI solver. :suicide:

OneEightHundred fucked around with this message at 10:42 on Jul 5, 2015

bgreman
Oct 8, 2005

ASK ME ABOUT STICKING WITH A YEARS-LONG LETS PLAY OF THE MOST COMPLICATED SPACE SIMULATION GAME INVENTED, PLAYING BOTH SIDES, AND SPENDING HOURS GOING ABOVE AND BEYOND TO ENSURE INTERNET STRANGERS ENJOY THEMSELVES

AuxPriest posted:

Wait wait wait correct me if I'm wrong but you're using Aurora for the universe generation stuff etcetc...

If so, that's awesome! I absolutely adore Aurora.

Not sure how that'd work. It's not like there's an Aurora API. He might be being inspired by Aurora (as evidenced by his recent post history and the discussion of the thing in the Grognard Games thread), but there's plenty of examples of stellar system generation code out there, which are what Aurora's generation routines are based on anyway.
