Synthbuttrange
May 6, 2007

Why yes, we do drive on the left side of the road here.

EoinCannon
Aug 29, 2008

Grimey Drawer
That's a stylish bus, Orange. The cel shading works nicely.

I've been stupidly busy at work for the last month or so. A director we know had a concept for a kids' TV show that he wanted to sell, so he got the company I work for to make an intro and teaser sequence to take to MIPCOM in Cannes. This basically boiled down to me and the one other staff member taking direction from him and creating the whole thing.
It was a nightmare from the start and I'm ashamed of how it came out but it's a story for another time.

The show's website

Because of working massive overtime I couldn't work on Cohen the Barbarian. But since finishing that project I've been able to work on him again.
I could be a lot happier with how it came out, and considering it was supposed to be a modeling exercise, I spent way more time doing everything else. :doh:
On the plus side I learned more about mental ray, hair and comping in Fusion.

The next project I start will definitely be more of a pure modeling challenge. No texturing or messing around with hair and poo poo. BigKofJustice suggested an animal so maybe that's the way to go next.



Super large version

http://www.eoinjcannon.com/Cohen_Web.jpg

SGT. Squeaks
Jun 18, 2003

Two men enter, one man leaves. That is the way of the hobotorium!
drat, that turned out awesome, man. Great portfolio material.

GFBeach
Jul 6, 2005

Surrounded by wierdos
I'm trying to rig a robot-y character and I'm stuck. Below is a diagram of the joint chain as it is:


[joint chain diagram, 640x480]


Just in case it's not clear, here's the goal: Joint 1 should only rotate on Z. Joint 2 should only rotate on Y. Joint 3 should only rotate on X. Joint 4 should only rotate on Z.

I can replicate this perfectly in FK, but I can't seem to get IK to do anything remotely like this. I've tried disabling the unused rotation axes and limiting their degrees of rotation, but it usually just locks out all rotation on joints 1 through 3. I've heard something about building it like a reverse foot with multiple SC solvers for the IK handles, but every time I try to figure out how exactly to set that up... :psyduck:

Anyone got any ideas?

EDIT: Added more background info.

GFBeach fucked around with this message at 04:59 on Oct 8, 2009

tuna
Jul 17, 2003

Is it even sensible to try and do that with an IK solver? The solver will literally throw a poo poo and not know what the hell to do. (Seriously, I think it would be very unstable, if it's possible at all.)

You should be using a much simpler IK chain that drives nulls, which in turn drive the parts of the mechanical mesh to do what you want. Use direction and orientation constraints for the parts that rotate on an axis not aligned with a bone.
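
A rough Python sketch of that kind of setup (assuming Maya, since SC solvers were mentioned; all the names, positions and axis choices below are made up for illustration): a throwaway driver chain gets the IK handle, and each mechanical part hangs off a locator that only inherits the single rotation axis it's allowed to use.

code:
import maya.cmds as cmds

# throwaway driver chain - the IK solver only ever sees these joints
root = cmds.joint(name='drv_joint1', position=(0, 0, 0))
mid = cmds.joint(name='drv_joint2', position=(0, 4, 0.5))
end = cmds.joint(name='drv_joint3', position=(0, 8, 0))
handle, effector = cmds.ikHandle(startJoint=root, endEffector=end,
                                 solver='ikRPsolver', name='drv_ik')

# one locator ("null") per robot part; each orient constraint skips the
# axes that part is not allowed to rotate on
part1 = cmds.spaceLocator(name='part1_null')[0]
cmds.orientConstraint(root, part1, skip=['x', 'y'])   # part 1: Z only
part2 = cmds.spaceLocator(name='part2_null')[0]
cmds.orientConstraint(mid, part2, skip=['x', 'z'])    # part 2: Y only

# parent (or parent-constrain) the actual meshes under the locators,
# then just animate the IK handle as usual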

Synthbuttrange
May 6, 2007

EoinCannon posted:

The show's website

This is unbearable!

Also Cohen looks awesome. Maybe you should think about converting the hairs to sculpts and getting him printed out in 3d?

GFBeach
Jul 6, 2005

Surrounded by wierdos

tuna posted:

You should be using a much simpler IK chain that drives nulls, which in turn drive the parts of the mechanical mesh to do what you want. Use direction and orientation constraints for the parts that rotate on an axis not aligned with a bone.

That worked perfectly! Thanks! :woop:

pistolshit
May 15, 2004

EoinCannon posted:

[quoted image]

The one thing that jumps out at me is that his beard looks like a tail. The hair doesn't seem to be coming off of his chin; instead it looks like it's growing out of the beard, especially as you get further away from the face.

Otherwise, it's drat good. I especially like the boots and the lighting.

EoinCannon
Aug 29, 2008

Grimey Drawer
Yeah, I agree about the beard.
My problem is that 3ds Max hair and fur seems to have a limit on how many segments a strand of hair can have. So if I have long continuous hairs, they don't have enough segments to look like wiry beard hairs, so I had to hack it a bit. If I just ran the hairs exactly how a real beard would go, it looked all smooth and weird and I couldn't get the shape I wanted.

Actually the whole process of getting the hair to work in mental ray with the fast rasterizer and matte materials was a complete nightmare. I probably spent more time just on the hair than on everything else combined.

This was my starting point for the hair:
http://mentalraytips.blogspot.com/search/label/hair
But try using that method with mental ray's own matte material: it crashes every time, which makes the whole technique almost useless if you want to composite hair. I basically had to render the hair out and then render another separate pass just to get a usable alpha. For the amount of hair I wanted to render, though, the fast rasterizer was pretty much the only option short of waiting 10 hours for a render.

Big K of Justice
Nov 27, 2005

Anyone seen my ball joints?
How is the beard done?

The method I used in the past was cloth patches acting as deforming hair guides, grouped up and controlling different fur/pelt objects. That last part is important: otherwise the guides may swing naturally, but the fur/pelt they're controlling won't split or part - it'll just fill in all the gaps in motion - so you wind up needing several different overlapping fur/pelt objects controlled by different guides.

Maybe you can map a high detail segment to a lower resolution segment? I haven't worked with fur in very many packages so I'm not sure how it works in Max.

Big K of Justice fucked around with this message at 02:02 on Oct 9, 2009

EoinCannon
Aug 29, 2008

Grimey Drawer
The beard is two pieces: shorter hairs coming out of the face, and basically a long tapered cylinder coming out of his chin with hair growing from it. That's why it looks like a tail - it pretty much is a tail. I could probably make it a little better by growing the hair in shorter segments that still follow the overall beard path, or by using a hair plugin, but those all seem to come with their own set of bugs :(

tuna
Jul 17, 2003

Great rendering, EoinCannon. Love it when a character is all polished like that. Have to agree that the lighting on the lower half of the beard makes it look like a tail, but otherwise it's a drat fine piece, well done!

Well, this is the same goddamn character I've been posting in these threads for about 15 years now, but things took a hit when I got a job, then got laid off, picked up progress on it, then got another job (at Blur, woop!) and things died down again for a while... until recently.



Slightly older version

Slightly racist version.

Next up I really have to do some shading and minor weighting tweaks and a face rig, develop the short film, etc. (you know the drill).

-A n i m 8-
Feb 5, 2009
What is the best software to go from DVDs into Premiere Pro 2.0? It's reel update time...

Big K of Justice
Nov 27, 2005

Anyone seen my ball joints?
Mac? I use Mac the Ripper and mpegClipper to pull footage/chapters from DVDs, convert to .dv or QuickTime, and edit in an editor.

For the PC? Not sure.

Back in the good ol' days you could get shot breakdowns from work in any format you wanted. These days you have to wait until the film is out on home video before footage is released, and it's final shots only.

So at that point, why bother? Grab the DVD.



butterypancakes
Aug 19, 2006

mmm pancakes
MPEG Streamclip

Big K of Justice
Nov 27, 2005

Anyone seen my ball joints?

butterypancakes posted:

MPEG Streamclip

Actually yes, that was the name of the program.. :doh:

-A n i m 8-
Feb 5, 2009
MPEG Streamclip rocks, thanks guys!

cubicle gangster
Jun 26, 2005

magda, make the tea
Eoin - props on the cgtalk result :)


I'm looking into some low poly/real time work and may be in need of some tutorials.

Basically I want to take our high poly, textured models, build a low poly mesh around them, use Flatiron for the unwrap (because I am poo poo at it) and then render out a diffuse with lighting, a normal map and maybe a specular. We're going to look at throwing them into Unity or CryEngine then, see how it goes.

Is this the best way to do it, am I missing anything, and does anyone know of any tutorials which cover the high poly -> low res model step? I don't actually know how to do that... just that it can be done.

I've done render to texture before, but only for the same object - I don't know how to get one object to render onto another one to do the translation, if anyone follows. I don't know where to start, and all the tutorials I found for it are for ZBrush.

cubicle gangster fucked around with this message at 16:24 on Oct 12, 2009

ceebee
Feb 12, 2004
You could decimate it in ZBrush (the resulting mesh may not be appropriate for animation) or bring it into Max/Maya/Topogun/3DCoat for retopology. I believe Max has some good scripts that attempt to automate the polycount reduction, but you'd have to check ScriptSpot or some other resource for more info. Once you have the low poly you can bake all the high poly stuff into reasonably sized maps for a real time engine, and then apply them all to the low poly. There are programs that make the baking much easier, like xNormal.

I would apply for the Topogun beta if they are still accepting people; it's a pretty speedy program and you can churn out some good topology in no time. ZBrush also has manual retopo tools, but in my experience it's a little picky about when it wants to work properly once you get the model more than 50% done or so.

Topogun handles proxy meshes (your high poly) up to 1-5 million polys, from what I've experienced. ZBrush is also pretty good at handling large amounts of polygons, so you could take it in there to use Decimation Master and then throw it into another favorable program to retopo it.

ceebee fucked around with this message at 17:25 on Oct 12, 2009

DefMech
Sep 16, 2002
Is this the kind of thing you're looking for?
http://www.poopinmymouth.com/tutorial/normal_workflow.htm

cubicle gangster
Jun 26, 2005

magda, make the tea
The topo side doesn't really bother me - there's not too much to remodel/simplify, it's more the process of going from the high to the low version.
xNormal and the Ben Mathis tutorial look like exactly what I need though, thanks!
Topogun might come in useful later on too, we're just keeping it simple for now.

This might be a stretch, but if we've got an object with VRay displacement on it and want to bake the normal map, does it just take the high poly version or will it take the disp too?
No biggie if not.

Should be getting a Flatiron license in the next couple of days, I'm quite looking forward to this :)

Sigma-X
Jun 17, 2005

cubicle gangster posted:

The topo side doesn't really bother me - there's not too much to remodel/simplify, it's more the process of going from the high to the low version.
xNormal and the Ben Mathis tutorial look like exactly what I need though, thanks!
Topogun might come in useful later on too, we're just keeping it simple for now.

This might be a stretch, but if we've got an object with VRay displacement on it and want to bake the normal map, does it just take the high poly version or will it take the disp too?
No biggie if not.

Should be getting a Flatiron license in the next couple of days, I'm quite looking forward to this :)

It'll take the displacement. Whatever data exists at rendertime is the data it will project, so if you had an animated displacement map you could render it out at any frame, etc.

For what you want to do, build a low poly mesh, sit it exactly on top of your high poly mesh, and set up some UVs (either totally unique, or with the 2nd-Nth duplicate/mirrored poo poo moved out beyond the 0-1 window by a full unit, i.e. put it in 1-2).

Next, put a Projection modifier on the low poly mesh, add the high poly poo poo to the Projection modifier (Max won't add groups, you have to open all your groups first, it is a cocksucker), and adjust your cage (by default it will generate a horrible mess, so reset that, use the push slider, and adjust some verts manually if need be).

Once that's set up, with the low poly selected, open up the Render To Texture menu, make sure "use projection" is checked, make sure the UVs are correct (sometimes it will want to auto-unwrap them instead of using the existing ones) and add the various maps you want to render out. Make sure you set up paths for them - the render window won't show them. Then mash the render button and watch as it renders out a complete map in the window.

Check your files, realize you hosed up your project, and repeat :)
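
The UV step in the middle of that is the easy part to sketch, so here it is as plain numpy arithmetic (nothing Max-specific; the arrays and the mask are made up for illustration): the unique shells stay inside 0-1 and the duplicate/mirrored shells get pushed a whole unit over into 1-2 so they don't double up during the bake.

code:
import numpy as np

def offset_mirrored_shells(uvs, mirrored, units=1.0):
    """Shift the UVs of duplicated/mirrored shells a whole unit along U."""
    out = uvs.copy()
    out[mirrored, 0] += units   # 0-1 becomes 1-2
    return out

# toy data: four UVs, the last two belong to a mirrored shell
uvs = np.array([[0.1, 0.2], [0.4, 0.8],
                [0.1, 0.2], [0.4, 0.8]])
mirrored = np.array([False, False, True, True])
print(offset_mirrored_shells(uvs, mirrored))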

International Log
Apr 3, 2007

Fluent in five foreign tongues!
Grimey Drawer
First renders of the new job!

park:


[image: 1024x683]


train station:


[image: 2000x1000]



[image: 2000x1000]


hobby stuff:

lamp:


[image: 1920x1080]


pillow (cushion?)


[image: 960x540]

cubicle gangster
Jun 26, 2005

magda, make the tea

Sigma-X posted:

Then mash the render button and watch as it renders out a complete map in the window.

I cannot get this to work at all - I've followed Ben Mathis' tutorial and gone through your post a few times now... It's just rendering out the low poly object, only with the parts covered by the high poly object occluded.

Can't for the life of me figure out how to get the high poly object to show up.
Here's what I'm doing (don't laugh at my UV layout, it was automatic...):



The low poly couch has the yellow material on it - the high poly is fully textured. The normal map is also blank, same if I change to scanline.

edit:
http://www.chaosgroup.com/forums/vbulletin/showthread.php?t=47247&highlight=projection+modifier
Never mind :(
Projection doesn't work in VRay. So we'd need to convert everything to scanline to bake the normals, then convert it all back to VRay - retexturing all the objects - to bake the lighting/maps.

gently caress :(

edit: so I did it that way anyway, just to see how hard it'd be.


260 polygons!!!

It looks like total poo poo and it's a lot more awkward than I thought it'd be, but it works :)

cubicle gangster fucked around with this message at 14:42 on Oct 13, 2009

Sigma-X
Jun 17, 2005
Huh, didn't know it didn't work with VRay. Of course, we don't use VRay here so it never really came up :)

You can use mental ray if that's better than scanline.

If you're going to bake normals for dynamic lighting, bake ambient occlusion maps rather than lightmaps (requires mental ray). Baked lighting and normals together look really weird most of the time.

Also, I'm not sure what your target platform is, but if you're trying to make a virtual walkthrough of an architectural scene in whatever engine on PC, you could go a lot higher polycount and get a better bake. 260 triangles for a couch is like Wii specs for a couch belonging in a full level.

DefMech
Sep 16, 2002
I was going to say the same thing about polycount. The real bottleneck is shader complexity and textures. I've used full res Evermotion furniture in Unity before with no performance issues. It was a fairly simple scene and didn't have anything as dense as that couch, though.

cubicle gangster
Jun 26, 2005

magda, make the tea
I know it's either normals or lightmaps - don't know what we'll be using. It was just a really quick test to go through the whole process and see how long it'd take, for costing etc. We got offered an RT job a while ago, but as we couldn't cost it for poo poo we had to let it go.

If we use CryEngine I'll do normals - if it's something like Unity it'll be lightmaps. Fortunately we don't need to use VRay to bake the lighting if we do normal maps - and if we're using Unity then we probably won't need the projection modifier.

edit: yeah, the polycount. I've never made a low poly model before, so I just roughed one out really quick for the cage. It is a bit silly, haha.


International Log:
That park has come out really well. Recognise those trees too, haha.
Not a fan of the people on the station - they take too much away from it, I think. If you took them off, did some heavy stylish Photoshop work on the render and sat just a couple of photoreal people in there, it might work quite well.
I prefer images where, instead of using the people to make it look like a concept, the image itself does that so the people don't get in the way... although I'm not a client, and they tend to want shitloads of people.

cubicle gangster fucked around with this message at 18:53 on Oct 13, 2009

sigma 6
Nov 27, 2004

the mirror would do well to reflect further

International Log: Nice stuff!
Where did you get the people and how did you composite them?
Did you have to match the lighting of the stock photos?
I agree with cubicle that there are a bit too many people on the station one.

I always thought lightmaps were occlusion maps but I guess I was wrong. What is the difference?

tuna: I really want to see that with materials.

Thanks for the MPEG Streamclip tip, guys!

sigma 6 fucked around with this message at 23:52 on Oct 13, 2009

DefMech
Sep 16, 2002
Occlusion is just the kind of effect where it darkens cavities and areas where objects meet. Lightmaps are for baking light (highlights, shadows, color...) into the texture.

sigma 6
Nov 27, 2004

the mirror would do well to reflect further

Thanks. How does the lightmap get fed into the material? I have never heard of Maya doing this, so is this just a Max thing? There is a "lit and shaded color map" in the Transfer Maps menu, so maybe that is it... hmm.

I thought I had read somewhere that a lightmap was just an occlusion map which got multiplied over the diffuse color texture. Guess I was wrong.

Sounds like it is lighting information which would be screened over the diffuse color texture (?) Or does one feed it into a certain attribute in the shader?
How would this account for movement? If the character is moving through lights, or if the lights themselves are moving?

Is there some good standardized resource for this?

Synthbuttrange
May 6, 2007

So, I've got to get an A1 render out of Max for a print. That's 9933 x 7016 pixels if I'm going up to 300 dpi, but Max doesn't seem to handle images that big. How do I manage this render?
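
For what it's worth, the arithmetic checks out, and one common workaround when a renderer won't handle a single huge frame is to render the image in strips (or tiles) and stitch them afterwards. Here is just the arithmetic as a Python sketch - no Max API involved, and the strip count is arbitrary:

code:
# A1 is 841 x 594 mm; at 300 dpi that gives the 9933 x 7016 figure above
MM_PER_INCH = 25.4
width_px = round(841 / MM_PER_INCH * 300)    # 9933
height_px = round(594 / MM_PER_INCH * 300)   # 7016
print(width_px, height_px)

# split the frame into horizontal strips that get rendered separately
strips = 4
for i in range(strips):
    y0 = i * height_px // strips
    y1 = (i + 1) * height_px // strips
    print(f'strip {i}: rows {y0} to {y1 - 1} ({y1 - y0} px tall)')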

edit:
Problem #2

I've been looking at this tutorial over here

The thing that's stumping me is that I've never seen the three rollouts under the Lights, Maps, Fresnel settings! Where the hell did those come from? Have I just been missing a huge important part of Max all this time?


edit: I'm an idiot, it's for a plugin.

Synthbuttrange fucked around with this message at 14:25 on Oct 14, 2009

DefMech
Sep 16, 2002

sigma 6 posted:

Thanks. How does the lightmap get fed into the material? I have never heard of Maya doing this, so is this just a Max thing? There is a "lit and shaded color map" in the Transfer Maps menu, so maybe that is it... hmm.

I thought I had read somewhere that a lightmap was just an occlusion map which got multiplied over the diffuse color texture. Guess I was wrong.

Sounds like it is lighting information which would be screened over the diffuse color texture (?) Or does one feed it into a certain attribute in the shader?
How would this account for movement? If the character is moving through lights, or if the lights themselves are moving?

Is there some good standardized resource for this?

In a very general sense, when you render the scene, it renders directly into a texture for the geometry instead of a normal rectangular render window. The process is supported by both Max and Maya, among others.

http://kokosovar.free.fr/lightmap.jpg <- first example I found.

You can either render the map already blended with the diffuse or render the lighting information separately, which is then overlaid on a diffuse in your shader. The second way is used when you've got a tiling texture. The geometry has 2 UV sets, one for a tiling diffuse and another for lighting. You can save a lot of space since the lighting information doesn't need to be as sharp as your diffuse.

You don't really account for movement, baked lighting is best used for static geometry.
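
A toy numpy version of what that boils down to (not any particular engine's shader; the texture sizes and UVs below are arbitrary): the tiling diffuse is looked up with wrapped UVs, the lightmap with its own unique 0-1 UV set, and the two are simply multiplied.

code:
import numpy as np

def sample(tex, u, v, wrap=False):
    """Nearest-neighbour texture lookup; wrap=True tiles the UVs."""
    h, w, _ = tex.shape
    if wrap:                          # tiling diffuse: UVs may go past 0-1
        u, v = u % 1.0, v % 1.0
    x = min(int(u * (w - 1)), w - 1)
    y = min(int(v * (h - 1)), h - 1)
    return tex[y, x]

def shade(diffuse, lightmap, uv_tiled, uv_unique):
    albedo = sample(diffuse, uv_tiled[0], uv_tiled[1], wrap=True)
    baked = sample(lightmap, uv_unique[0], uv_unique[1])
    return albedo * baked             # baked lighting multiplied over diffuse

# the lightmap can be much lower resolution than the diffuse
diffuse = np.random.rand(256, 256, 3)
lightmap = np.random.rand(32, 32, 3)
print(shade(diffuse, lightmap, (2.3, 0.7), (0.5, 0.5)))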

Das MicroKorg
Sep 18, 2005

Vintage Analog Synthesizer
A quick newbie question concerning XSI Hair Dynamics:
I've got the head of a teddy bear as a low-poly mesh, which I subdivided twice. I then added fur to it, setting the Hair Generation Operator to 2 subdivs, and all is great. Next I want to add dynamics to the hair, but as soon as I do, the hair looks like it's applied to the un-subdivided mesh again - but only when I render the scene; in the viewports everything still looks great and dynamic. When I mute the Dynamics Operator it's fine again, but obviously without dynamics.

Am I doing something wrong here, or is this a bug?


EDIT: By subdivisions I mean Geometry Approximation subdivisions

Das MicroKorg fucked around with this message at 16:27 on Oct 14, 2009

International Log
Apr 3, 2007

Fluent in five foreign tongues!
Grimey Drawer
Thanks guys, and yeah, I agree on the people. Not my choice where to place 'em :(

Hinchu
Mar 4, 2004

Please keep a watchful eye out for hinchus. They are very slow and dumb, and make for easy roadkill.
A quick update on the CG backdrops for the shark exhibit.



We had a soft opening for the left ramp this week as we are starting to work on the other side. I can't wait to get all of the other graphics and exhibit items into the ramps! The rest of the backdrops are going up next week.

Edit: And I should also mention that there is water-effect lighting on the walls.

Hinchu fucked around with this message at 21:28 on Oct 14, 2009

EoinCannon
Aug 29, 2008

Grimey Drawer
That looks cool, Hinchu.
It came out nicely.
The lighting and water effects add heaps to it also.

Sigma-X
Jun 17, 2005

sigma 6 posted:

Thanks. How does the lightmap get fed into the material? I have never heard of Maya doing this, so is this just a Max thing? There is a "lit and shaded color map" in the Transfer Maps menu, so maybe that is it... hmm.

I thought I had read somewhere that a lightmap was just an occlusion map which got multiplied over the diffuse color texture. Guess I was wrong.

Sounds like it is lighting information which would be screened over the diffuse color texture (?) Or does one feed it into a certain attribute in the shader?
How would this account for movement? If the character is moving through lights, or if the lights themselves are moving?

Is there some good standardized resource for this?

This is a game engine thing (usually).

Rendering out lightmaps is done for static objects, and is frequently handled in the engine - Source and UT3 both handle this. I don't believe CryEngine does, because I believe they do fully dynamic lighting for their environments, but I might be wrong.

Typically you build a new set of UVs (usually the second set, but depending, again, on the engine/object/etc) that has all the faces in an area/map/etc uniquely unwrapped. Then you render out the lighting information to this map and multiply it down over all of the objects, so a fully lit image will show the diffuse clearly, etc. There are methods to get this to work with normal maps, etc, but this is a static lighting solution. You could use the regular UVs for an object if it is unwrapped uniquely, but that is rarely a good idea, since a fully unique unwrap (which lightmapping requires, unless your lighting is mirrored/tiled as well) is usually a bad idea for environment assets. Additionally, you typically want to keep the lightmaps as few and as small as you can.

There are animated light maps/projectors that can be used for certain effects as well - in the lightmap posted earlier, those fans could be made to look like they were rotating by animating the parts of the lightmap that they are in.

Baking lightmaps winds up costing memory, as you have another texture call, but it is very fast for lighting and allows you to get complicated lighting effects like ambient occlusion, reflected light/radiosity, etc, since it's all rendered out offline.

Lightmaps are old tech but they're still very useful. However, with how many games are doing dynamic lighting nowadays (open world games with day/night cycles, destruction-based games like [shill]Red Faction Guerrilla[/shill]) they aren't as useful as they used to be.

The "lightmaps" you see people rendering out that are AO bakes being overlaid on the diffuse aren't providing real ambient occlusion, either, but thats another topic and I think its been covered in this thread before.

sigma 6
Nov 27, 2004

the mirror would do well to reflect further

Thanks DefMech and Sigma X:

It seems that AO should be multiplied over the AC (ambient color) and then the directional light added - at least this is how I understand it from Zap's blog on the subject. Most people still multiply the AO over the diffuse color and then tweak the levels and opacity, which is why I was confused. I assumed the AO pass was what's called a "lightmap" in video game terms.
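
Put as arithmetic with made-up per-pixel values (a sketch of that reading of Zap's blog, not any particular renderer's shading), the two ways of applying the AO give quite different results:

code:
# toy per-pixel values, all made up
diffuse = 0.8    # surface albedo
ambient = 0.3    # flat ambient / indirect term
direct = 0.7     # directional light contribution
ao = 0.5         # baked ambient occlusion at this pixel

proper = diffuse * (ambient * ao + direct)      # AO gates only the ambient term
shortcut = diffuse * ao * (ambient + direct)    # AO multiplied over everything
print(proper, shortcut)                         # 0.68 vs 0.4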

Some questions:

Why would you need separate UVs? Why not use the same UVs as you would for any other map? The geometry should have the same UVs for diffuse color, normal maps etc. Why would lightmaps be different?
Sounds like this is just lighting baked into a texture map, so you get things like bounced light and static lights added to the texture. How does the engine know how to do this? Does that mean the shaders apply it in the engine, and how? Sounds like any engine which supports dynamic lighting would make lightmaps moot, or at least not so important.

Does Unity support dynamic lighting? I am looking forward to finding out how this works once my school gets it.

Is there a good guide for this stuff?

sigma 6 fucked around with this message at 05:12 on Oct 16, 2009

DefMech
Sep 16, 2002
Having a separate UV set keeps you from having to use unique textures for absolutely everything. If you bake the light separately, you can use the same diffuse over multiple objects. Saves disk space and makes scene management simpler. I'm sure Sigma-X could give you a more real-world example. Lighting also doesn't always need to be as highly-detailed as the diffuse, so you can map it differently and use a lower-resolution texture.

Unity does have fully dynamic lighting if you want to go that route. I'm hesitant to make this comparison because it's not a perfect parallel, but as of right now, Unity is probably most easily compared to Unreal Engine 2.5 as far as rendering goes. When 2.6 comes out (before the end of the year), they're going to have true floating point render textures and better support for multiple render targets, bringing it right up to the rendering capabilities of other modern game engines (deferred rendering, real-time GI, etc).

I'm actually doing a game in Unity right now and I'm using no real-time lights. Everything is baked since it's for the web and needs to run on a huge spectrum of machines.

Here's a guide for lightmapping in Maya for Unity: http://download.unity3d.com/support/resources/files/LightMapTutorial.pdf

It also explains, in context, why you want to use a second UV set for lighting.

AuntJemima
Jul 22, 2007
Ah, today I was looking at some of the previews for FF13 and drat it looks cool. Especially the art and CG for Shiva, drat...
Anyways, ever since I discovered editors for games in like grade 3, that is basically what I would mostly do with my time. I find creating things much more fulfilling and fun than playing games. Of course I still love to play, but creating is just a cool feeling when you can say you did it yourself.

Anyways this seems like a good place to ask about the school I'm applying to and will hopefully get into. Just wondering what your take is on it based on the demo reels and course outline.

http://www.thinktanktrainingcentre.com/

Out of the places that offer video game art and design in my area, this seems to be the best one I've looked at so far. I went on a field trip there in grade 12 with a couple of people and my video game design teacher, and he thinks it's the best one too. Just want a second opinion... Don't want to end up in a school like Colin's Bear Animation!
