|
This might be general to the point of meaninglessness, but say you have three shader algorithms: one that just applies colour, one that blends a texture and a colour, and one that blends two textures and a colour. Is it generally better to have a single program that takes two textures and a colour and somehow works out what needs to be used (say, with non-zero blend parameters), or is it better to have three specialised programs and switch between them as necessary? How about if there's only one texture at most? The talk of the cost of switching the state machine around is making me wonder if it's better to put some logic in the shaders, especially when it's fairly basic. This is only a few quads (<50) in ES, but I'm doing some generalisation, and if it's worth changing this while I'm at it then I might as well!
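For what it's worth, the single-program idea usually amounts to driving a lerp/mix with uniform blend weights so the general path collapses into the special cases. A CPU sketch of that reduction (hypothetical names, not from any particular engine):

```javascript
// Sketch (not from the thread): the "one general program" approach, run on the
// CPU to show how zero-valued blend weights reduce it to the simpler cases.
// lerp(a, b, t) mirrors GLSL's mix().
function lerp(a, b, t) {
  return a.map((ai, i) => ai * (1 - t) + b[i] * t);
}

// General path: colour, optionally blended with up to two textures.
function shade(colour, tex0, tex1, w0, w1) {
  const withTex0 = lerp(colour, tex0, w0); // w0 = 0 -> plain colour
  return lerp(withTex0, tex1, w1);         // w1 = 0 -> colour + one texture
}

const colour = [1, 0, 0];
const tex0 = [0, 1, 0];
const tex1 = [0, 0, 1];

// With both weights at zero, the general shader is exactly the colour-only one.
console.log(shade(colour, tex0, tex1, 0, 0)); // [ 1, 0, 0 ]
```

The trade-off is that the general program always pays for both texture fetches, so whether it beats program switching for <50 quads is something you'd have to measure.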
|
# ? Nov 4, 2014 23:54 |
|
|
# ? May 15, 2024 04:15 |
|
I'm just starting out my CG career but I gotta say: man the inconsistency in coordinate systems across platforms is annoying. Right now I'm working with a few different programs that have varying conventions as to whether Z or Y is up, and which way "forward" is. Is there a list of equations to go from and to all the possible orientations? I know I'll have to work them out by hand but
|
# ? Nov 5, 2014 18:29 |
|
brian posted:So I was wondering why I have to add the +0.01 to the x of the tex2D call; if I have it without the +0.01 it misses one of the colours and everything is indexed wrong, and I suspect that when I make it support multiple lines of palettes per file for power-of-2 textures and whatnot, I'll have to add the same to the y. Any help would be fab. Really not quite sure since I haven't given it a good look yet, but remember that things are 0-indexed, so you might have to use (length - 1) or (width - 1) somewhere. It sounds like that could be the issue based on past experiences. Edit: Oh! Also, integer texel coordinates map to texel edges, not their centers. You have to add half a pixel's width to the coord to get the center of the pixel. Something like: curPt += 0.5 / vec2(imgW, imgH); Jewel fucked around with this message at 13:08 on Nov 6, 2014 |
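To illustrate the half-texel point on the CPU (my sketch, with hypothetical names, not code from the thread):

```javascript
// CPU sanity check: texel i of an N-texel-wide texture has its center at
// (i + 0.5) / N, not at i / N. Sampling at i / N lands on the texel's edge,
// where the filter can pull in the neighbouring texel.
function texelCenterUV(i, size) {
  return (i + 0.5) / size;
}

// For a 16-entry palette, the centers are evenly spaced and stay inside (0, 1):
const size = 16;
const centers = Array.from({ length: size }, (_, i) => texelCenterUV(i, size));
console.log(centers[0], centers[15]); // 0.03125 0.96875
```

This is also why the +0.01 fudge factor "works": it happens to push the lookup off the edge and toward the texel center, but the principled offset is exactly half a texel.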
# ? Nov 6, 2014 13:05 |
|
Jewel posted:Really not quite sure since I haven't given it a good look yet, but remember that things are 0-indexed, so you might have to use (length - 1) or (width - 1) somewhere. It sounds like that could be the issue based on past experiences. Ah, that explains a lot. Does that mean that if I'm having to subtract half a pixel from the y, my y indexing is off? It's all so very hard to debug with these things; thanks a bunch though!
|
# ? Nov 6, 2014 13:46 |
Colonel J posted:I'm just starting out my CG career but I gotta say: man the inconsistency in coordinate systems across platforms is annoying. Right now I'm working with a few different programs that have varying conventions as to whether Z or Y is up, and which way "forward" is. Is there a list of equations to go from and to all the possible orientations? I know I'll have to work them out by hand but

I, for one, am not really sure what you're asking. Are you working in different modelling applications or different graphics APIs? The distinction is fairly important.

If it's APIs: I'm pretty sure OpenGL consistently has (0,1,0) as its camera up-vector and (0,0,-1) as its camera direction vector, regardless of OS or computer architecture. I'm not sure about Direct3D, but if you're working in both, you'd probably have to write two vastly different frameworks anyway. If you're trying to draw a model you've imported with your own code, it should be a simple matter of defining a (series of) rotation matrix/matrices that orients it correctly and applying that to every model you get from that particular modelling application.

If you're talking about different modelling applications: can't you just rotate whatever you're importing until it has the orientation you want? I haven't really worked a lot in modelling, but I don't imagine that'd be very hard.
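There's no single canonical list as far as I know, but each conversion is just a change of basis. One example worked out (a sketch, assuming both systems are right-handed):

```javascript
// Going from Z-up (Blender/Max style) to Y-up (OpenGL style), both
// right-handed, is a 90-degree rotation about X: (x, y, z) -> (x, z, -y).
function zUpToYUp([x, y, z]) {
  return [x, z, -y];
}
// The inverse rotation recovers the original coordinates.
function yUpToZUp([x, y, z]) {
  return [x, -z, y];
}

console.log(zUpToYUp([1, 2, 3]));           // [ 1, 3, -2 ]
console.log(yUpToZUp(zUpToYUp([1, 2, 3]))); // [ 1, 2, 3 ]
```

Left-handed targets (Direct3D-style conventions) additionally need one axis negated, which flips the handedness; that part you do have to check per application.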
|
|
# ? Nov 6, 2014 18:39 |
|
brian posted:Ah, that explains a lot. Does that mean that if I'm having to subtract half a pixel from the y, my y indexing is off? It's all so very hard to debug with these things; thanks a bunch though! I'd say take your math, put it into Python or something, and run through it on the CPU, checking against manually calculated values to see whether you're landing where you should (when you get the 0-1 value, multiply it by image width/height and check where that falls on the real image). Edit: vvv Oh jeez, I forgot they didn't as well. Thanks for the reminder! Jewel fucked around with this message at 02:21 on Nov 7, 2014 |
# ? Nov 6, 2014 23:48 |
|
Jewel posted:I'd say take your math, put it into Python or something, and run through it on the CPU, checking against manually calculated values to see whether you're landing where you should (when you get the 0-1 value, multiply it by image width/height and check where that falls on the real image). Haha, I finally worked it out: I was under the assumption that texture coordinates started in the top left for some reason. It worked fine with a texture with only two lines because it wrapped around, laffo. It all makes sense now, thanks!
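For anyone hitting the same thing: the top-left vs bottom-left mixup is just a v-flip. A sketch (hypothetical helper, not brian's code):

```javascript
// Image files are usually addressed from the top-left row downward, while GL
// texture coordinates put v = 0 at the bottom. Converting is just v -> 1 - v.
function imageRowToGLv(row, height) {
  // Centre of the row, measured from the top of the image...
  const vTopDown = (row + 0.5) / height;
  // ...flipped into GL's bottom-up convention.
  return 1 - vTopDown;
}

// With only 2 rows, the flip just swaps the rows, so wrapping made a
// two-line palette appear to work by accident.
console.log(imageRowToGLv(0, 2)); // 0.75 (top image row is the upper GL half)
console.log(imageRowToGLv(1, 2)); // 0.25
```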
|
# ? Nov 7, 2014 01:48 |
|
This is probably the wrong thread, but Creative Convention didn't look promising: if I have a rigged skeleton in a .mb or .ma file, how do I get it out without having a maya license? I can get skeletons out of .fbx with the sdk, but maya looks tougher.
|
# ? Nov 12, 2014 15:06 |
|
fritz posted:This is probably the wrong thread, but Creative Convention didn't look promising: if I have a rigged skeleton in a .mb or .ma file, how do I get it out without having a maya license? I can get skeletons out of .fbx with the sdk, but maya looks tougher. It seems Maya's format is stuck inside Maya, and unless someone's written a conversion tool from .ma to, well, anything, you're out of luck. Worst case, you can probably just use a student copy of Maya; it's very easy to get, though it is kind of big. If you can get the file into a .3DS you can maybe use this http://usa.autodesk.com/adsk/servlet/pc/item?siteID=123112&id=22694909
|
# ? Nov 12, 2014 23:28 |
|
Jewel posted:It seems Maya's format is stuck inside Maya, and unless someone's written a conversion tool from .ma-> anything really, you're out of luck Yeah Going to try the free trial version tomorrow, I don't need anything fancy just the file export, and if I can do all the exports at once I can be done with it.
|
# ? Nov 13, 2014 03:34 |
|
.ma files are ASCII and have a spec (http://download.autodesk.com/us/may...umber=d0e677725). I think it has the same flavor of problems that FBX/COLLADA can have though: Basically everything you want is buried under multiple layers of functionality, it takes a good amount of work to dig the data you want out, and you have to convert anything that's in a format that you can't handle (which can potentially be very difficult). I don't know if .ma is any worse in that respect than FBX/COLLADA, but at least with FBX/COLLADA there are third-party converters already.
|
# ? Nov 13, 2014 04:29 |
|
It's a dump of Maya-specific features. There's a lot of random bitfields and flags in it with no explanation of the values, and the spec doesn't help.
|
# ? Nov 13, 2014 05:25 |
|
What's the best way to include shaders in a WebGL program? All the tutorials I've seen so far just put the shaders directly into the HTML, but surely there has to be a better way? On that note, are there some particularly good WebGL tutorials?
|
# ? Nov 17, 2014 00:20 |
|
I do: JavaScript code:
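The snippet itself didn't survive in this copy of the post, but a hedged guess at the approach (which squares with the Three.js comment below) is keeping the GLSL source in a JavaScript string, my reconstruction rather than the original code:

```javascript
// Shader source embedded directly in JS, one line per array entry to keep
// GLSL line numbers readable in compiler error messages.
const vertexShaderSource = [
  'attribute vec3 position;',
  'void main() {',
  '    gl_Position = vec4(position, 1.0);',
  '}',
].join('\n');

// Later, in a WebGL context: gl.shaderSource(shader, vertexShaderSource);
console.log(vertexShaderSource.split('\n').length); // 4
```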
|
# ? Nov 17, 2014 02:48 |
|
Could put it in code:
|
# ? Nov 17, 2014 03:19 |
|
Is this a good thread for Blender-related questions, particularly ones pertaining to cloth simulation and 3D animation/rigging? I've got a mesh off the internet from Blendswap that lacks an underlying "person" mesh: it has a face and arms and clothes, but nothing underneath. Would cloth simulation on the clothes still work without a character mesh to pin them to? There are arms and legs; maybe I should pin them to those? I just want to be able to do a simple walk cycle without the clothes looking stupid.

Alternative solution: is there any way to rig clothes so that the weighted vertices squish together the lower their weight is? To better explain: suppose I have a character with a dress and I parent the dress to the leg armature with automatic weights. The more the dress is actually influenced by the leg, the more accurate I feel it is; but at the extremities (at the waist, for example) there's this weird effect where the 'rim' of the clothing is also moved or deformed, when I'd prefer the rim of the cloth to stay unmoved and the vertices near it to be constrained by those fixed vertices. Any ideas? For the most part I feel cloth simulation may be overkill, and it also makes me cry as one of those things that just looks really difficult.
|
# ? Nov 17, 2014 03:43 |
|
If nobody can help you you might have more luck in the 3DCG thread, which is more about using the software and all that
|
# ? Nov 17, 2014 04:27 |
|
Suspicious Dish posted:I do: I can't imagine writing a shader that way. I guess I could write it in a separate file and then transition to that for deployment but in that case I might as well dump it in the html. Subjunctive posted:Could put it in I'm not sure I understand. Isn't this what I was talking about trying to avoid, or am I missing something here?
|
# ? Nov 17, 2014 20:33 |
|
HappyHippo posted:I'm not sure I understand. Isn't this what I was talking about trying to avoid, or am I missing something here? Sorry, I thought you meant with escaping or string literals. What don't you like about it? I'm not sure what you're looking for in a solution. You could put it in separate files and XHR to them if you wanted, too.
|
# ? Nov 17, 2014 23:07 |
|
Suspicious Dish posted:I do: That's exactly what Three.js does as well.
|
# ? Nov 17, 2014 23:30 |
|
Subjunctive posted:Sorry, I thought you meant with escaping or string literals. What don't you like about it? I'm not sure what you're looking for in a solution. You could put it in separate files and XHR to them if you wanted, too. Ideally I would like to keep them in their own files. XHR seems the way to go.
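A minimal XHR-based loader along the lines discussed above (a sketch, not a vetted implementation; `loadShaderSource` is a name I made up):

```javascript
// Fetch a shader file as text and hand the source to the compile step in a
// callback. Runs in a browser; XMLHttpRequest isn't available elsewhere.
function loadShaderSource(url, onLoad) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.responseType = 'text';
  xhr.onload = function () {
    if (xhr.status === 200) {
      onLoad(xhr.responseText);
    }
  };
  xhr.send();
}

// Usage:
// loadShaderSource('shaders/vertex.glsl', function (src) {
//   /* gl.shaderSource(shader, src); gl.compileShader(shader); ... */
// });
```

The usual caveat: this is asynchronous, so program creation has to wait until every shader file has arrived (or be wrapped in something promise-like).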
|
# ? Nov 17, 2014 23:37 |
|
Before you go reinventing the wheel, glTF seems to be the asset format for WebGL.
|
# ? Nov 17, 2014 23:46 |
|
baka kaba posted:If nobody can help you you might have more luck in the 3DCG thread, which is more about using the software and all that Thanks! I'll check it out.
|
# ? Nov 18, 2014 03:25 |
|
pseudorandom name posted:Before you go reinventing the wheel, glTF seems to be the asset format for WebGL. http://www.gltf.org/
|
# ? Nov 18, 2014 06:11 |
|
So I just posted this thing in the screenshot thread where I used webgl/shaders to plot the julia set (move the mouse around to see the psychedelic colors). At first I stored the real and imaginary components of z in the red and green channels of the texture, scaling them to the range (0.0, 1.0), but that didn't offer enough precision. In an attempt to get more precision, I tried using the remaining two channels to store an additional 8 bits each, like so: code:
Edit: Oops, I should have used 255 instead of 256. Problem solved! HappyHippo fucked around with this message at 19:29 on Nov 24, 2014 |
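The shader code itself is missing from this copy of the post, but here is a CPU sketch of the kind of two-channel packing being described, with the "255 instead of 256" fix applied (my reconstruction, not the original code):

```javascript
// Split a [0, 1] value across two 8-bit channels. The divisor has to be 255:
// each channel stores 256 levels, so a channel value of 1.0 means 255/255.
function pack(v) {
  const scaled = v * 255;
  const hi = Math.floor(scaled);
  const lo = Math.floor((scaled - hi) * 255);
  return [hi / 255, lo / 255]; // two texture channels, each in [0, 1]
}

function unpack([hi, lo]) {
  return hi + lo / 255; // the low channel contributes 1/(255*255) per step
}

// Roughly 16 bits of precision instead of 8: max error is about 1/65025.
for (const v of [0, 0.3, 0.5, 0.999, 1]) {
  console.log(v, Math.abs(unpack(pack(v)) - v) < 1 / 65025);
}
```

With 256 as the divisor, a fully saturated channel would need the value 256/255 > 1, which the fixed-point texture clamps, corrupting exactly one end of the range.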
# ? Nov 23, 2014 23:03 |
I'm about to finish introductory graphics and rendering (two separate courses) and we have to do a final project. My mate and I decided to make a joint project for both classes about diffuse reflectance in real time, using the many-point-lights method with imperfect shadow maps. So far I'm fairly clear on what we have to do (with the exception of some details, but I already found reading material for most of it). One thing I'm not quite sure about is how you do hemispherical shadow/depth maps. Projecting a point onto a sphere is intuitive enough, but I can't find anywhere that explains how to map an entire triangle to a sphere, so that its edges are mapped as well before they're rasterized for the final depth map. Are you just supposed to accept the approximation offered by mapping the vertices to the sphere and not accounting for edge warping? E: Come to think of it, I guess we're already making a gross approximation with the ISMs, so approximating the hemisphere seems like the lesser of two evils there? At any rate, I still feel like I could use a bunch more reading, so if anyone has some links to some good resources on the subject I'd greatly appreciate them. Joda fucked around with this message at 01:48 on Nov 27, 2014 |
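On the hemispherical map question: the usual choice is a paraboloid projection rather than a true spherical one, and the ISM paper in particular splats point samples instead of rasterizing triangles, which sidesteps the curved-edge problem entirely. A sketch of the projection (my code, assuming directions in the +z hemisphere):

```javascript
// Standard paraboloid mapping for hemispherical depth maps: a direction d in
// the +z hemisphere maps to d.xy / (1 + d.z), which lands in the unit disc.
function paraboloidProject([x, y, z]) {
  const len = Math.hypot(x, y, z);
  const dx = x / len, dy = y / len, dz = z / len; // normalize the direction
  return [dx / (1 + dz), dy / (1 + dz)];
}

// Straight ahead maps to the centre; grazing directions approach the rim.
console.log(paraboloidProject([0, 0, 1])); // [ 0, 0 ]
console.log(paraboloidProject([1, 0, 0])); // [ 1, 0 ]
```

Straight edges do not stay straight under this mapping, so rasterizing large triangles through it is only an approximation unless you tessellate them; with ISM-style point splatting there are no edges to warp in the first place.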
|
# ? Nov 27, 2014 01:37 |
|
HappyHippo posted:So I just posted this thing in the screenshot thread where I used webgl/shaders to plot the julia set (move the mouse around to see the psychedelic colors). Can't you just use a two-component 16 bit per component texture format?
|
# ? Nov 27, 2014 17:24 |
|
High Protein posted:Can't you just use a two-component 16 bit per component texture format? How would I go about that? I would certainly like to. For texImage2D, as far as I can tell, the allowed internal formats are ALPHA, LUMINANCE, LUMINANCE_ALPHA, RGB and RGBA.
|
# ? Nov 27, 2014 20:16 |
|
HappyHippo posted:How would I go about that? I would certainly like to. For texImage2D the as far as I can tell the allowed internal formats are ALPHA, LUMINANCE, LUMINANCE_ALPHA, RGB and RGBA. Hmm yeah, it appears that isn't possible in WebGL, sorry.
|
# ? Nov 27, 2014 20:41 |
|
OES_texture_half_float as an extension should let you use float16 values for each channel. I think it's pretty universal on desktop browsers.
|
# ? Nov 28, 2014 01:17 |
For deferred shading (OpenGL), how do you get anything other than floats between 0 and 1 into a texture? I'm currently accounting for this discrepancy in my shaders, but had to go with a solution that seems very shady, where I exploit the fact that all vertices in my scene are between -1 and 1 on all axes. Also, are there any easy ways to avoid artifacts from position-map imprecision when you generate your light map? I know multisampling the light map is an option, but performance is already an issue with what I'm doing. This is how the light map looks. The artifacts are most obvious on the small box in the front and the wall to the right. (The scene is Cornell boxes.) E: I figured out the imprecision problem by encoding my scene information in RGBA16F. I still can't figure out how to get values over 1 or under 0 into the texture, though. Joda fucked around with this message at 16:04 on Dec 5, 2014 |
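For what it's worth, the "shady" trick described above is just the standard scale-and-bias. A CPU sketch of the encode/decode pair (hypothetical names):

```javascript
// Map [-1, 1] data into the [0, 1] range a fixed-point texture can store,
// then undo it in the shader that reads the G-buffer.
function encode01(p) {
  return p.map((c) => c * 0.5 + 0.5); // [-1, 1] -> [0, 1]
}
function decode01(e) {
  return e.map((c) => c * 2 - 1);     // [0, 1] -> [-1, 1]
}

console.log(decode01(encode01([-1, 0.25, 1]))); // [ -1, 0.25, 1 ]
```

With a float internal format like RGBA16F the stored values aren't confined to [0, 1] in the first place, so the bias becomes unnecessary, which matches the edit above.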
|
# ? Dec 4, 2014 22:10 |
|
Contains Acetone fucked around with this message at 17:33 on Jun 24, 2020 |
# ? Dec 6, 2014 02:29 |
Contains Acetone posted:Here you go for 32 bit floats and what-not: Thanks. Looks like I'd unknowingly fixed that problem as well when I increased the position map precision to 16-bit floats . Figured that I had to set a state somewhere to stop GL from clamping, so never thought to check it. Joda fucked around with this message at 03:27 on Dec 6, 2014 |
|
# ? Dec 6, 2014 03:25 |
|
So, I've been trying to use GeDoSaTo (an upsampling post-processing tool for prettifying vidjamagames) and I noticed a bug with it: one of the post-processing options is a film grain effect. This looks pretty nice and all, but after the game has been running for a few minutes, it will lose its randomness and a set of rotating lines will become apparent amid the grain. Having looked through the code for the shader, I assume the problem is in the random texture generator function it seems to be using: code:
code:
I tried replacing one of the two components of the float2 vector with a fixed value, so code:
As you might have guessed from this post, I know nothing about HLSL; I've got only the most rudimentary of programming knowledge. I was wondering though if anyone could: A - find a solution for this, and if practical B - give me a cliff-notes rundown on what was going wrong and why.
|
# ? Dec 14, 2014 11:00 |
|
The_White_Crane posted:I was wondering though if anyone could: A - find a solution for this, and if practical B - give me a cliff-notes rundown on what was going wrong and why. Looks like a floating point precision issue. If tc is in [0,1] or so then tc + float2(timer,timer) is going to drop more and more significant digits of the seed value as timer gets large and the sine will only assume a few discrete values. What that function seems to essentially be doing is code:
I guess the solution is to use a better noise function. There are a bunch of perlin/gradient/simplex libraries around (a random one I googled), but I have no experience with any of them. If you're not familiar with shader languages they might be a bit of a pain, also. You can try to just replace the initial 3d -> 1d hash with something more well-behaved. code:
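The replacement-hash snippet itself is missing from this copy of the post. As a stand-in in the same spirit (my sketch, not Xerophyte's actual code), an integer hash along the lines of the well-known Wang hash behaves far better than the sine trick:

```javascript
// A 3d -> 1d hash done on integers instead of a sine. The mixing constants
// are from the common Wang-hash family; the per-axis multipliers are
// arbitrary odd constants I picked for illustration.
function wangHash(seed) {
  seed = (seed ^ 61) ^ (seed >>> 16);
  seed = (seed + (seed << 3)) >>> 0;
  seed = seed ^ (seed >>> 4);
  seed = Math.imul(seed, 0x27d4eb2d) >>> 0;
  seed = seed ^ (seed >>> 15);
  return seed >>> 0;
}

// Quantized pixel coords plus a frame counter in, a value in [0, 1) out.
function noise3to1(x, y, frame) {
  const seed =
    (Math.imul(x, 1973) + Math.imul(y, 9277) + Math.imul(frame, 26699)) >>> 0;
  return wangHash(seed) / 4294967296;
}

// The frame counter can grow indefinitely without the output degrading,
// because integer wraparound loses no entropy the way float addition does.
console.log(noise3to1(12, 34, 1));
console.log(noise3to1(12, 34, 1000000));
```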
|
# ? Dec 14, 2014 13:33 |
|
Well, your first solution worked perfectly, so thank you for that. I'm glad I was at least guessing roughly where the problem was occurring, if not why. I must admit, I don't think I have the requisite background knowledge to follow your explanation though. I get the problem of dropping more digits as the timer value increases, and I think I follow how that would cause the sine to gravitate towards a smaller set of values, but I have trouble with the idea of mashing a 4d variable to a 3d one. When you have a 4d variable, for example, are the different dimensions any specific 'thing' or are they essentially arbitrary? Does the whole set of values evaluate out to a single number somehow? Honestly, until I looked at this stuff I hadn't encountered multidimensional variables before, and my vaguely remembered college maths seems to be inadequate to the task. Sorry, I didn't mean to start trying to pump you for a comprehensive explanation of vector maths. I appreciate your effort.
|
# ? Dec 14, 2014 15:55 |
|
The_White_Crane posted:Well, your first solution worked perfectly, so thank you for that. Glad that it worked, although I should perhaps point out that it's still not a very good noise function. I'll see if I can explain better. I'll start with what that code is supposed to be doing, then move on to how it works and finally point out where it fails. Re-reading this I realize it's far too long and I went in to way too much detail. Oh well, it was pretty fun to think about and I wanted to explain it to myself anyhow. Hopefully it'll also make things a bit clearer. What it is The intent of that code snippet is to produce 4 floats of noise: arbitrary float values between 0 and 1. A noise function is essentially a specific type of a hash function -- you send in some input and it produces a mostly arbitrary output. If you send in the same input twice you get the same output twice, if you change the input a little bit you get a completely different* output. I say this 3d to 4d, because the input is 3 floats: the x & y coordinate of the "td" value and the timer value. The output is 4 floats: the 4 rgba components. There's nothing particularly magical about the size of the input and output vectors, but producing more floats from less data tends to be harder. Each float is 32 bits so you're trying to produce 4*32 bits of arbitrary output from 3*32 bits of arbitrary input. There's no way to cover the entire output space. It's still possible to do 1d -> 4d noise, but you need to be a bit careful to make sure that your 32 bits of input get spread over your 128 bits of output in a "good" way. What generally happens when this fails is that you end up with a discernible pattern in the output, which is bad since the entire point of noise is usually to not have a pattern. In this particular case, we can try to look at how the code is trying to achieve it's random output. * In this case, where the intent is to be a random number generator (hence the rn in the name I guess). 
There are other types of noise that are (more) continuous, meaning that a slightly different input just means a slightly different output. What it does We'll start with the first line. float noise = sin(dot(tc + float2(timer,timer),float2(12.9898,78.233))) * 43758.5453; This takes the 3 input floats and tries to produce an arbitrary float. It's compact but if we expand the expression it's actually doing float x = (tc.x + timer) * 12.9898; float y = (tc.y + timer) * 78.233; float noise = sin(x + y); noise = noise * 43758.5453; We're taking the sine of x+y, something that's roughly the size of ~100*timer + 50*tc, which gets us a number between -1 and 1. Then we multiply that with 43758.5453 to get an "arbitrary" float between -43758.5453 and +43758.5453. Because we're multiplying with something large the idea is that if you shift tc just a little bit you'll be changing the sine slightly and, since we're multiplying with 43758.5453, shifting the noise value a whole lot. The intent is that if you just look at the fractional part then you are computing a "random" value for each pixel of the screen, for scenarios where this might be nice (like a full screen grain filter). This works fine for the "original" noise algorithm where you don't have any sort of time included, but here it's breaking down when the timer is large. We'll get to why in a bit. Finally, we're going to try to take our single large noise float and get 4 new floats between -1 and 1 out of it. float noiseR = frac(noise)*2.0-1.0; float noiseG = frac(noise*1.2154)*2.0-1.0; float noiseB = frac(noise*1.3453)*2.0-1.0; float noiseA = frac(noise*1.3647)*2.0-1.0; The frac function just strips the integer part of the float and leaves the fractional part, between [0, 1]. The idea is that if we multiply our large, arbitrary noise value by different values (the 1.2154, 1.3453, etc) then the fractional part is going to change significantly, and we get a basically random value. 
For instance, let's say the arbitrary noise float ended up being 1224.86. We'd get float noiseR = frac(1224.86)*2.0-1.0 = 0.72; float noiseG = frac(1224.86*1.2154)*2.0-1.0 = 0.389688; float noiseB = frac(1224.86*1.3453)*2.0-1.0 = 0.608316; float noiseA = frac(1224.86*1.3647)*2.0-1.0 = 0.132884; which, hey, looks pretty random. It's not, of course, and the multipliers here are actually really poorly chosen -- much better to go with values that aren't as close to oneanother so that small shifts in the noise value won't affect each component in the same way -- but it's good enough randomness for a convincing grain filter which is what we need. Why it fails Why is it breaking down when the timer is large? It's a fairly bad noise function, and noise on the GPU is black magic in the first place, so there are a couple of things that could go wrong. But let's look at the first line since that looks to be the primary issue: float x = (tc.x + timer) * 12.9898; float y = (tc.y + timer) * 78.233; float noise = sin(x + y);. A float is 32 bits, of which 1 is the sign, 8 are exponent and 23 are the significand/mantissa. Those 23 bits mean that you're working with 8ish significant base-10 digits. If you're adding two floats of very different magnitude then the less significant digits in the smaller one will be ignored; e.g. if you take float(100000) + float(1.000001), you can expect the result to be float(100001). If "tc" is some sort of screen space position in [0,1] and you compute "tc.x + timer" then any dropped digits from tc would mean that this value will be the same for regions of the screen. This only manifests as the timer value becomes large. 
How to fix it The change to computing float noise = sin( dot(float3(tc.x, tc.y, timer), float3(12.9898, 78.233, 0.0025216)) ) * 43758.5453; expands to float x = tc.x * 12.9898; float y = tc.y * 78.233; float z = timer * 0.0025216; float noise = sin(x + y + z) * 43758.5453; Basically, I'm trying to keep x, y and z at the same magnitude for longer to avoid that particular precision failure. It's still bad, though. It will fail in the same way eventually and the entire sine thing is really not a very good way to get an arbitrary float out of the three inputs in the first place. One obvious improvement is to add a frac to stop the timer part of the sum from getting large. float x = tc.x * 12.9898; float y = tc.y * 78.233; float z = frac(timer * 33.12789); // Now we want this to be "more arbitrary", hence the larger number float noise = sin(x + y + z) * 43758.5453; Xerophyte fucked around with this message at 17:28 on Dec 14, 2014 |
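The float32 digit-dropping described above can be reproduced outside a shader with Math.fround, which rounds a double to the nearest 32-bit float:

```javascript
// Emulate 32-bit float rounding on the CPU: once the timer term is large,
// neighbouring texture coordinates collapse to the same float value, so the
// sine gets the same input for whole regions of the screen.
const f = Math.fround;

const tcA = 0.001, tcB = 0.002; // two nearby "pixels"
console.log(f(tcA) === f(tcB));                   // false: distinct when small
console.log(f(100000 + tcA) === f(100000 + tcB)); // true: identical when large
```

At 100000 the float32 spacing is already 1/128, so any per-pixel variation finer than that is rounded away, exactly the banding the grain filter shows after a few minutes.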
# ? Dec 14, 2014 17:20 |
|
Wow! Thank you very much! That's significantly clearer now, actually. I think the big problem I had was getting confused by the dot function; I didn't have much to do with that even when I did study maths (or I've forgotten it entirely) and I didn't understand at all how that was actually being fed its numbers in the code, because I'm so totally unfamiliar with HLSL syntax and functions. Thanks a bunch; it was great of you to go out of your way to educate me.
|
# ? Dec 14, 2014 19:51 |
|
Apparently the line thickness property in webgl might not work in all browsers - in some browsers all lines will have a thickness of 1. I guess the next best thing is to draw "lines" using quads. I was thinking you could probably do this with a geometry shader, but it seems like the kinda thing someone has probably already done. Google didn't turn up much, is there an implementation of that sort of thing out there already, or am I going to have to roll my own? Edit: oh I guess webgl doesn't even have geometry shaders, so much for that idea. Any other solution would be appreciated. HappyHippo fucked around with this message at 00:32 on Dec 17, 2014 |
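Rolling your own quad per segment is mostly a normal-offset calculation. A sketch under the assumption of plain 2D screen-space lines (joins and caps between segments are where the real pain lives):

```javascript
// Expand a 2D segment into a quad by offsetting both endpoints along the
// segment's unit normal by half the desired thickness.
function segmentToQuad([x0, y0], [x1, y1], thickness) {
  const dx = x1 - x0, dy = y1 - y0;
  const len = Math.hypot(dx, dy);
  const nx = (-dy / len) * (thickness / 2); // unit normal, scaled to half-width
  const ny = (dx / len) * (thickness / 2);
  // Four corners, ready to draw as two triangles (TRIANGLE_STRIP order).
  return [
    [x0 + nx, y0 + ny],
    [x0 - nx, y0 - ny],
    [x1 + nx, y1 + ny],
    [x1 - nx, y1 - ny],
  ];
}

// A horizontal segment of thickness 2 becomes a 10 x 2 rectangle.
console.log(segmentToQuad([0, 0], [10, 0], 2));
// [ [ 0, 1 ], [ 0, -1 ], [ 10, 1 ], [ 10, -1 ] ]
```

Since WebGL has no geometry shaders, this expansion is done on the CPU (or in a vertex shader by duplicating each endpoint with a +1/-1 side attribute).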
# ? Dec 17, 2014 00:30 |
|
|
|
Line thickness is sadly one of the most annoying graphics things relative to how simple it seems like it should be, and there are a few papers on it; the problem is called "line stroking". Here's an OpenGL extension that does it, which I think only works on Nvidia: https://www.opengl.org/registry/specs/NV/path_rendering.txt And here's a paper on how it's done, if you feel up to implementing it: https://hal.inria.fr/hal-00907326/PDF/paper.pdf
|
# ? Dec 17, 2014 02:04 |