|
SupSuper posted:Nothing like shaders to hide your lack of artistic skills and make even boring spheres look like something else. Are you making a Tron clone? https://www.youtube.com/watch?v=BbBqPkdheFg
|
# ¿ Jun 13, 2013 14:28 |
|
|
ijustam posted:I've owned this thing for 4 years and kept it through 3 moves and I finally got around to doing something with it. A giant, low-resolution thermometer/weather station display
|
# ¿ Oct 11, 2014 00:50 |
|
steckles posted:The floor and ceiling geometry is imported from .obj models that meet very specific requirements. Walls are determined by the sector edges and the difference in height between each edge and its neighbours, if it has any. I modelled everything in Wings and textured it "in-game". Do what modern GPUs do and output in a "block-linear" pattern. Instead of going horizontally (bad for reuse):
WWWW XXXX YYYY ZZZZ
Or vertically (bad for write combining):
WXYZ WXYZ WXYZ WXYZ
Do a combination of both:
WWXX WWXX WWXX WWXX YYZZ YYZZ YYZZ YYZZ
(where the number of outputs you do along a given "row" is determined by your memory architecture. You can just experiment with it.)
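A minimal sketch of that traversal in Python (the function name and generic tile size are mine, not from the post):

```python
def block_linear_order(width, height, block_w, block_h):
    """Visit pixel coordinates in block-linear order: small block_w x block_h
    tiles walked row-major, with the tiles themselves also laid out row-major.
    Writes stay clustered (good for write combining) while nearby data is
    still revisited soon (good for reuse)."""
    order = []
    for tile_y in range(0, height, block_h):
        for tile_x in range(0, width, block_w):
            for y in range(tile_y, min(tile_y + block_h, height)):
                for x in range(tile_x, min(tile_x + block_w, width)):
                    order.append((x, y))
    return order

# A 4x4 image with 2x2 tiles: the first tile's four pixels come out together.
print(block_linear_order(4, 4, 2, 2)[:4])  # [(0, 0), (1, 0), (0, 1), (1, 1)]
```

Tune `block_w`/`block_h` to your memory architecture, per the post.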
|
# ¿ Oct 14, 2014 20:54 |
|
Suspicious Dish posted:hey look it's a readable lisp Polish notation is never readable
|
# ¿ Apr 14, 2015 02:47 |
|
Superschaf posted:But if you reverse it, it all falls into place! This actually makes me wonder -- are there any languages that allow for "infix functions"? I ask because it occurred to me that "prefix" is basically what is used for function calls in every other language, and I'm wondering if infix is more readable to me simply out of habit or if there's some other sort of ergonomics at play. Obviously C++ allows for a lot of operator overloading, but that's problematic for its own reasons.
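For what it's worth, some languages do allow arbitrary infix functions -- Haskell lets you call any two-argument function infix by wrapping it in backticks, and Kotlin has an `infix` modifier. In most languages, though, infix is reserved for operators, which are just sugar over prefix-style calls; a small Python sketch (the `Vec` class is mine, purely for illustration):

```python
class Vec:
    """Minimal 2D vector; '+' is infix sugar over the prefix-style __add__."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, other):
        return Vec(self.x + other.x, self.y + other.y)

a, b = Vec(1, 2), Vec(3, 4)
infix_result = a + b          # infix spelling
prefix_result = a.__add__(b)  # the very same call, written prefix-style
print(infix_result.x, infix_result.y)  # 4 6
```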
|
# ¿ Apr 15, 2015 15:12 |
|
piratepilates posted:It's actually really simple and easy to start with strangely. Ray-Tracer on the back of a business card code:
Hubis fucked around with this message at 11:18 on Jun 16, 2015 |
# ¿ Jun 16, 2015 11:14 |
|
Avenging Dentist posted:Maybe you should try a nice filter instead. Or, more helpfully, a median filter.
|
# ¿ Aug 23, 2015 07:13 |
|
Jo posted:Much obliged. I'll give this a try. I'm still trying to grok the difference in their behavior. The median is the 'middle value' of the set (i.e. the one in the middle when all values are sorted based on some criteria) whereas the (arithmetic) mean is the average of the set. Since the median value will match one of the adjacent pixels, there will be no interpolation/blurring, so you'll preserve sharp edges in the data set. This is good if you're upsampling something from a limited palette and want the output to stay in that palette, for example. Of course you can only really do this if you have a reasonable way to sort the set to find the median in the first place (and that can be the tricky part for a median filter) but in your case it would be an easy implementation since the image is already black-and-white.
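The idea as a naive Python/NumPy sketch (names mine): because the median of an odd-sized window is always one of the input values, a black-and-white image stays black-and-white.

```python
import numpy as np

def median_filter(img, k=3):
    """Naive k x k median filter; slow per pixel but fine for small images."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")  # replicate edges so sizes match
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            # the median of the 9 window values IS one of those values,
            # so no new (blurred) intensities are ever introduced
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

noisy = np.zeros((5, 5), dtype=float)
noisy[2, 2] = 1.0                  # a single "salt" pixel
print(median_filter(noisy).max())  # 0.0 -- the outlier is voted away
```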
|
# ¿ Aug 24, 2015 11:17 |
|
evilentity posted:Working on smooth lights for box2dlights/libgdx. Left is smooth, right is default. Fun! That's a really neat and elegant solution for 2d. I've been working on something surprisingly similar!
|
# ¿ Nov 11, 2015 21:24 |
|
HardDisk posted:I always wanted to build a terrain generator. Do you have any recommendation on articles/tutorials? there's a ridiculous amount of professional AND hobby-level content out there, but this is as good a place to start as any: http://vterrain.org/Elevation/Artificial/ e: this, along with most of what Amit put up, is also great and probably more current (I couldn't find the link before): http://www-cs-students.stanford.edu/~amitp/game-programming/polygon-map-generation/ Hubis fucked around with this message at 22:55 on Dec 18, 2015 |
# ¿ Dec 18, 2015 22:49 |
|
lord funk posted:
Illumination is additive, so your output can be expressed as some linear combination of an arbitrary number of components. Usually this is Red/Green/Blue -- or, more correctly, whatever the primaries are for the color space you're operating in (which are monochromatic 700nm Red, 546nm Green, and 436nm Blue/Violet for CIE RGB). However, there's no reason you can't add more components. If they are chosen correctly, this allows you to have a much larger percentage of the perceptual gamut covered than the traditional triangular 3-component space would allow (as well as probably providing more efficient lighting, since you need a lot more power in Blue wavelengths than Green ones for the same perceived brightness). You might also have more than three primaries because single wavelength illumination isn't the same as broad-band illumination, and so having more primaries lets you better model "full spectrum" lighting. The MOST CORRECT way to do it would be: 1) Determine the color space your "color values" are actually outputting. Your monitor is probably outputting sRGB, but it's quite likely that the stage lights have a wider gamut than that (for reasons mentioned earlier). This doesn't really matter unless you care a lot about how the colors on the screen map to the colors on the stage. Either the colors on the screen will be less colorful and have their hues clamped/compressed, or the colors on the stage will be limited to the sRGB gamut (and may be less vibrant than the light is actually capable of). In some sense you can kind of skip this step to start, but that just means you're making some implicit decisions/not worrying about the mapping from display to actual lighting. One option would be to do all your simulation in XYZ space (which uses imaginary colors) which will make step 3 trivial. To display on your monitor, just do an XYZ-sRGB transform first, clamping the final color values to the sRGB gamut. 
2) Determine the color coordinates of each of your primaries in xyY space. You can start with educated guesses using an annotated CIE 1931 chromaticity diagram. Once you've got the system working, you can tweak them a little bit if the output seems biased to one shade or another. 3) Transform your sim output to xyY. If you had RGB values, you do this by transforming to XYZ first. This is just a matrix multiplication, with the transform matrix depending on your primaries (i.e. the color space). For sRGB, you multiply your input color by this: code:
code:
4) Compute the relative proportion of the primaries to produce the output Find a solution to the following linear equation: code:
The 'x' and 'y' values above are the coordinates for the primaries you determined in step 2. You want to solve for the various K values, which will be the relative brightness of each output. Since you're solving 6 variables over 2 equations, the results will be under-constrained, meaning you've got multiple sets of values that produce the same perceived color. This is what's called "Metamerism". Pick a heuristic to simplify the problem however you want. One option might be to compute the distance in xy space to each of the primaries, and then only solve for the furthest 1 and the nearest 2. 5) Adjust outputs based on desired luminance Scale the computed K values by the Y value from step 3. This is your brightness for each primary. Note that this might not technically be correct, depending on what your desired white point is, etc. Hubis fucked around with this message at 20:25 on Jan 12, 2016 |
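Steps 1 and 3 can be sketched like so -- the matrix below is the standard published linear-sRGB-to-XYZ (D65) transform, and the function name is mine:

```python
import numpy as np

# Standard linear-sRGB -> XYZ matrix (D65 white point).
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def srgb_linear_to_xyY(rgb):
    """Map a linear-sRGB triple to chromaticity (x, y) plus luminance Y."""
    X, Y, Z = SRGB_TO_XYZ @ np.asarray(rgb, dtype=float)
    total = X + Y + Z
    return X / total, Y / total, Y

# White should land on the D65 white point, roughly (0.3127, 0.3290):
x, y, Y = srgb_linear_to_xyY([1.0, 1.0, 1.0])
print(round(x, 4), round(y, 4))
```

Note this assumes your sim output is already linear; gamma-encoded sRGB would need to be linearized first.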
# ¿ Jan 12, 2016 20:23 |
|
(multi-posting for clarity) That being said, a quick and dirty way to do it would be the following: 1) Convert your sim output to HSV 2) Determine the two primaries on the color wheel that your H value lies between. Hue values are as follows: Red: 0 Yellow: 60 Green: 120 Cyan: 180 Blue: 240 Magenta: 300 https://upload.wikimedia.org/wikipedia/commons/3/33/Hsv-hexagons-to-circles.svg 3) Determine the saturated color output based on Hue and the two primaries. If H were 30, for example, your saturated output would be: R = 0.5 Y = 0.5 G = 0.0 C = 0.0 B = 0.0 M = 0.0 If H were 195, it would be R = 0.0 Y = 0.0 G = 0.0 C = 0.75 B = 0.25 M = 0.0 4) Based on the Saturation value, determine the actual color output A color with 0 saturation would produce R = Y = G = C = B = M = (1/6) So using the S value, linearly interpolate from the unsaturated value (1/6) to the saturated color determined in step 3 to produce your final component proportions. code:
Take the relative component values from step 4 and multiply them by V to give you the final overall brightness.
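The whole quick-and-dirty recipe as a Python sketch (the function name is mine; outputs are ordered R, Y, G, C, B, M at hues 0 through 300):

```python
import colorsys

def six_primary_mix(r, g, b):
    """Map an RGB color onto six hue-spaced primaries [R, Y, G, C, B, M]."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    hue = h * 360.0
    lower = int(hue // 60) % 6   # the primary just below the hue
    t = (hue % 60) / 60.0        # position between the two neighbours
    saturated = [0.0] * 6
    saturated[lower] = 1.0 - t
    saturated[(lower + 1) % 6] = t
    # lerp from the unsaturated mix (1/6 everywhere) to the saturated one,
    # then scale everything by V for overall brightness
    return [v * (s * c + (1.0 - s) / 6.0) for c in saturated]

# Orange (hue 30) splits evenly between Red and Yellow:
print([round(c, 3) for c in six_primary_mix(1.0, 0.5, 0.0)])
# [0.5, 0.5, 0.0, 0.0, 0.0, 0.0]
```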
|
# ¿ Jan 12, 2016 20:37 |
|
lord funk posted:You are awesome Hubis. Thanks - that was really helpful. Glad to help -- it's fortunate that I happened to just be doing a lot of reading on color spaces recently Another way of thinking of that second method, btw, is just thinking of the color space as a hexagon made of 6 triangles, with the pure colors at the edges and middle-grey at the center. Then you use Hue to determine which triangle you fall into, which gives you the three 'primaries' you want to use (middle grey, and the two saturated colors).
|
# ¿ Jan 12, 2016 23:53 |
|
lord funk posted:I'm totally with you there. I've got a minBrightness value and some scaling parameters in there to lessen the full on/off effect. Makes it a lot more natural looking. The piece we're playing is going to slowly ramp from choppy/bright water to steady/deep water over 30 minutes. I almost posted something about this, but figured it might not be a problem. It could be that the perceived brightness is nonlinear with respect to the numbers you're feeding to the system. One option would be to modify your 'white point' (the 1/6th middle grey) to be brighter, so that as you desaturate you're doing so to a brighter color. This still assumes it's linear, but might be enough to fix the problem. Note that you could also make the white point non-uniform (vary the brightness of different components) to tint the color somehow if you wanted. Another option would be to multiply your final brightness by a uniform saturation scale factor. In other words, something like code:
Finally, it might be some sort of gamma-like issue. You could try scaling all your individual components by a gamma factor before output: code:
This would amplify color components with low brightnesses, and leave high brightnesses mostly unchanged. This is akin to the gamma transforms you have to do when outputting a color to the screen when rendering in sRGB. Note that this would have the effect of reducing saturation somewhat, but might be more perceptually correct.
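Both tweaks sketched together in Python (the function and parameter names `sat_scale` and `gamma` are mine, purely illustrative):

```python
def shape_brightness(components, saturation, sat_scale=0.5, gamma=2.2):
    """Two optional perceptual tweaks for the 6-primary outputs:
    1) uniformly dim desaturated (washed-out) colors, and
    2) apply a gamma-style curve that lifts low component values while
       leaving values near 1.0 mostly unchanged."""
    dim = sat_scale + (1.0 - sat_scale) * saturation  # 0.5..1.0 as sat goes 0..1
    return [dim * (c ** (1.0 / gamma)) for c in components]

# A dim component gets lifted noticeably; a bright one barely moves:
out = shape_brightness([0.25, 1.0], saturation=1.0)
print(round(out[0], 2), out[1])  # 0.53 1.0
```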
|
# ¿ Jan 13, 2016 20:30 |
|
Suspicious Dish posted:watch in amazement as this random noise becomes... a meme!! *trains a deep-learning net on an endless supply of internet memes* *back-feeds a white noise signal into the outputs*
|
# ¿ Feb 21, 2016 16:45 |
|
Baloogan posted:the network dreams of memes code:
|
# ¿ Feb 21, 2016 22:00 |
|
cross-post from the 3D graphics thread...Hubis posted:If anyone is going to be at GDC this year, feel free to swing by and check out my talk Wednesday afternoon: Fast, Flexible, Physically-Based Volumetric Light Scattering
|
# ¿ Mar 9, 2016 18:56 |
|
I just assumed it was a David Bowie reference
|
# ¿ May 3, 2016 15:42 |
|
Sex Bumbo posted:Ill make everyone feel better about themselves, here's a burger emoji that I rendered in the stupidest way possible Mods plz change my username to "Procedural Emoji" kthx
|
# ¿ Aug 16, 2016 13:21 |
|
|
clockwork automaton posted:Been playing with using ragdoll physics on meshes in unity to animate So kind of like a super simple IK solver?
|
# ¿ May 22, 2017 16:55 |