ImplicitAssembler
Jan 24, 2013

DNEG's largest markets *are* India and China, and they're already doing like 80% of the shots... on Indian and Chinese movies.
MPC is MPC. It's always been quantity over quality.

I don't see Sony or ILM heading that direction, nor any of the Framestore companies.
Boutique studios are increasingly flourishing because their work depends on the expertise and quality of Canadian/US staff.

Get out of that MPC shithole and you'll see a much brighter world out there.

Ccs
Feb 25, 2011


ILM opened in Mumbai and all their new Jedi Academies are in Mumbai. Framestore is also increasing their Indian footprint in addition to their existing studio in Mumbai. My old department manager went to Framestore and said every meeting is about how they can get more work to India, and she has had to take on a team in Mumbai, in addition to her Canadian crew, to manage.

EoinCannon
Aug 29, 2008

Grimey Drawer
Happy holidays thread

I made a garbageman Santa, not sure why.

https://i.imgur.com/yhrGbxu.mp4

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
:patriot: to all the people collecting our trash so we don't live in a dumpster.

Nice work!

Big K of Justice
Nov 27, 2005

Anyone seen my ball joints?
Nah, India isn't taking all the jobs, I've been hearing that boogeyman for 20-odd years.

Heck, I was at Rhythm and Hues, which was 1000% pushing for that and couldn't make it work despite having two large Indian studios and the studio in Malaysia. The Malaysia thing was hilarious because the government was funding it, but it had to be all citizens, so R+H had to resort to hiring randos off the street, throwing them at training, and hoping some would stick. The attrition rate was 80-90% if I recall, and the whole thing wasn't feasible. Plus Malaysia got a bit spicy towards the end of our time there, demanding editorial review of whatever content R+H KL was working on... as a vendor, you can imagine how impossible that would have been (an insane demand, since the work was for the US market).

India couldn't compete with Canadian tax credits: overhead costs the same in India as in LA, so your only savings was labor [$2,000-$80,000 a year was the pay band 10+ years ago], but not time, in my experience [the cheaper guys took way longer and communication sucked]. I remember John Hughes lamenting not getting on board with Vancouver early enough; it was too late for R+H by the time they opened the Vancouver office.

The really skilled Indian artists are logically going to go for the most competitive offers and often that means getting a visa and going abroad.

ILM may be expanding there but it's not going to be easy for them.

I knew DreamWorks Animation had given up on their division there when Glendale artists ended up having to redo everything most of the time anyway.

Diabetes Forecast
Aug 13, 2008

Droopy Only
I am probably light-years behind everyone else in this thread, but I'm proud of finally making headway towards making my own animation projects happen.
I finished this over like 2-3 days while learning most of the Blender tools and playing with SpriteMancer (which I think was a bad idea; I need to get a better pixel-art effects tool).
https://www.youtube.com/watch?v=yjLubLVcvQo
I'm still trying to figure out a not-suck method for doing particles in EEVEE, but it looks like I'll have a bitch of a time getting that to play nice no matter what. Anyone have resources on what to do with that?

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope

Diabetes Forecast posted:

I'm still trying to figure out a not-suck method for doing particles in EEVEE, but it looks like I'll have a bitch of a time getting that to play nice no matter what. Anyone have resources on what to do with that?

What exactly are you having trouble with?

Diabetes Forecast
Aug 13, 2008

Droopy Only

Wheany posted:

What exactly are you having trouble with?

A lot of this might be spotty documentation and how the community talks about things, which may be incorrect? I already know they have insane ideas about what sub-d modelling is supposed to be like, so I'm hoping maybe they're wrong here too.

What I keep hearing is that the Particle Info node no longer works in EEVEE, and the only way to get most any particles working is to use Geo-nodes. I was kind of hoping I could just make particles the same way you do in game engines, but it seems like that's totally out of the question?
I can probably do stuff by using a bunch of show/hide with textures drawn on polygons, or frame atlasing (how I did the logo appearing in the video), but that feels a bit ridiculous when all I want is to mimic the older effect styles you'd see in PS1/Saturn/early-2000s PC games.
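
For anyone curious, the atlas trick can be scripted rather than keyed by hand. This is a rough, untested bpy sketch of the idea - "FX_Atlas" is a hypothetical material name, and it assumes the atlas image runs through a default-named Mapping node. It steps a 4x4 sheet one cell per frame, with constant interpolation so the cells snap instead of crossfading:

code:

import bpy

mat = bpy.data.materials["FX_Atlas"]      # hypothetical material name
mapping = mat.node_tree.nodes["Mapping"]  # default name of a Mapping node
loc = mapping.inputs["Location"]

cols = rows = 4  # 16-cell sprite atlas
mapping.inputs["Scale"].default_value = (1 / cols, 1 / rows, 1)  # show one cell at a time

for cell in range(cols * rows):
    loc.default_value[0] = (cell % cols) / cols             # U offset
    loc.default_value[1] = 1.0 - (cell // cols + 1) / rows  # V offset, top-left cell first
    loc.keyframe_insert("default_value", frame=cell + 1)

# Constant interpolation so frames snap, matching the retro flipbook look
for fc in mat.node_tree.animation_data.action.fcurves:
    for kp in fc.keyframe_points:
        kp.interpolation = 'CONSTANT'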

Elukka
Feb 18, 2011

For All Mankind
My impression is the particle system is horrible, not touched in ages, and meant to be replaced by geonodes. I'm not sure you can even reuse particle sims really because I think baking them locks the transforms too so you can't move them around. Not 100% sure on that.

I don't know what the status of geonode particles is and whether they're ready for use and I'm curious about that. I would love to have robust particles in Blender that can be reused and that I can do various things with, like change things over time, on collision, etc.

EoinCannon
Aug 29, 2008

Grimey Drawer
More holiday related modelling

Yesterday for sculpting and retop
Today for textures and some poses

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope

Elukka posted:

I don't know what the status of geonode particles is and whether they're ready for use and I'm curious about that.

Blender 3.6 added simulation zones (the newest release is 4.0.2), where you can alter geometry based on its state on the previous frame, so you can definitely do something with them. But I'm not enough of a node wizard to tell you how easy doing stuff like collisions and forces is.

Edit: and since you can mark geometry node groups as assets, you can reuse them across projects if you make something you like.
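
If you'd rather do the marking from a script, it's a couple of calls in bpy. Minimal sketch - "ParticleSim" is a placeholder for whatever your node group is actually called:

code:

import bpy

ng = bpy.data.node_groups["ParticleSim"]  # placeholder node-group name
ng.asset_mark()                           # flag the datablock as an asset
ng.asset_data.description = "Reusable simulation-zone particle setup"
ng.asset_generate_preview()               # optional thumbnail for the Asset Browser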

Harvey Baldman
Jan 11, 2011

ATTORNEY AT LAW
Justice is bald, like an eagle, or Lady Liberty's docket.

Quick question. Sometimes when I get 3D scan data from folks, it has these extra noise shells where the capture decides to make an extra surface over what's supposed to be there.

In ZBrush, what's the quickest way to deal with this kind of thing? I'm guessing a low-res DynaMesh to get those gaps closed up, and then just use projection settings and smoothing to average everything back out? Is that the right workflow?

Alterian
Jan 28, 2003

That's more or less what I do for photogrammetry. I get my crazy-high-poly model from RealityCapture, duplicate it, then DynaMesh -> ZRemesher -> Divide to add more geo if needed, and project the source back onto my cleaned-up model for geometry detail and vertex color.

cubicle gangster
Jun 26, 2005

magda, make the tea
Did a little sketch to learn some new parts of Tyflow.

https://i.imgur.com/ES3bMV4.mp4


Tyson (tyFlow's developer) has been quiet for the last year - over 12 months ago, someone was asking about fracturing getting a little more refined than the basic Voronoi, and about needing to pre-fracture. Tyson said 'yes, I think this can be improved'...

A few weeks ago, he dropped PRISM, his new fracturing engine, which I'm pretty sure is now the most advanced fracturing engine going. I'm just doing a really basic fracture based on PhysX contact points, but it can get way more complex, and the speed it can remesh / the control you have over the newly generated geometry is insane.

Here's his feature demo - https://www.youtube.com/watch?v=hmFHIMcaqtY

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
That looks really good!

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope
Hell yeah. Basic Voronoi fractures have annoyed me for a very long time because all the bits just look generally roundish.

cubicle gangster
Jun 26, 2005

magda, make the tea
The way it can handle smoothing groups, UV mapping, the amount of control you have over the slice planes that make the cuts, recursive dynamic fracturing, etc... it's absolutely wild. Rock solid stable too; its ability to cleanly re-mesh everything with no errors in near real time trips me out.
Strongly recommend anyone who does sims have a play around!

EoinCannon
Aug 29, 2008

Grimey Drawer
I've been using the new tyFlow for booleans and slices on heavy meshes for a project at work, and it's very fast and good.

Biohazard
Apr 17, 2002

cubicle gangster posted:

I was meaning to give you feedback on this sooner, but I had my second Pfizer shot late last week and was out of action for a few days! Sorry that this is a bit late, but it should be pretty universal.

This was from a few years ago and then I failed to check back to the thread until now, but thanks for the awesome feedback back then! I should post some of the stuff I've been working on since then one of these days.

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope
Me forming the hand of a character into a fist finger by finger for the third time in Blender: "Oh that's why you would use a pose library"

Ccs
Feb 25, 2011


So this is making the rounds at work today

https://openai.com/sora#capabilities

The artists under 30 I work with are saying this means we all won't have jobs in 5 years. Maybe I'm delusional, or maybe I'm just jaded by so many cycles of tech hype (I worked with a lot of Bitcoin bros at my first job, and the bosses at my third studio were NFT hypers), but this reminds me of the amazing papers at SIGGRAPH every year. Looks super cool! When can I expect it to become part of any workflow? ... *crickets*

Anyyyway, I'm sure it will further degrade the stock video market. And also make the general public assume that VFX work is all done by computers.

BonoMan
Feb 20, 2002

Jade Ear Joe
Yeah, everyone has been passing this around internally. Imma be honest, that's pretty frightening. Things are moving so fast and getting good so quickly that at this point I really don't feel comfortable making predictions. We are truly in "who the duck knows" territory. There will always be a place for human-made art, but when I see the papercraft seahorse thing... I do really think there is going to be a lot of replacement in lower- to mid-level video creation. I mean, driven by a smaller team of humans, but still.


And these companies rely on bringing these tools to market, so I think it's not apples to apples with SIGGRAPH white papers.

Synthbuttrange
May 6, 2007

AI will soon be hunting artists down to scoop our brains out like delicious pudding. :(

Ccs
Feb 25, 2011


This is a convincing argument about why it's not the end of the world, but I don't actually know if it's right. Because AI is a black box to me, and unless you're an AI researcher who isn't up themselves with their own hype, it's hard to know what the AI is actually doing.

quote:


I have watched carefully all the video examples in that link.

My honest view: That's cool progress on stability. I think that's the only good thing to mention.

Now, the rest. There is a reason why this works only with text to video and they didn't want to go any further for now.

I'll explain: with the prompt "A cartoon kangaroo disco dances," you can clearly see that it's some shot from a movie. The dance isn't a coincidence (nothing is); it's the exact dance, or a very similar one, from a specific shot.
The same happens with every single video example shown there. You would think it's an original generated video, but in fact it's just blended input. You can't go beyond the footage you used for training. Ever. Why? Because magic only exists in the Harry Potter world. Pure and simple. Let's be rational here. Spontaneous generation doesn't exist.

So that's fun and cool, for sure. But it is very limited as a tool to use in any professional space. Because if you mention or say something that isn't in the training as input, you'll end up with miserable results, ignored prompts, or you'll find yourself fighting forever to get exactly what you want.

This is the problem with AI: it can only "blend" what it already knows. It's not a robot out there having human experiences and getting fresh inputs. And this leads you exactly to the following place: the more specific you are, the more AI will ignore you or give you miserable results. Go ahead and try it. See it for yourself.
So that is the opposite of what anyone working in production wants.

So you end up realising you're better off doing the thing yourself instead of trying forever, or promising that "maybe" you'll get the drat simple little change you're being asked for, because there isn't a drat input that allows you to get exactly what you want.

So this is, to me, nothing but a shiny and fun gimmick to use at home for entertainment.

BonoMan
Feb 20, 2002

Jade Ear Joe

Ccs posted:

This is a convincing argument about why it's not the end of the world, but I don't actually know if it's right. Because AI is a black box to me, and unless you're an AI researcher who isn't up themselves with their own hype, it's hard to know what the AI is actually doing.

I'm not sure that quote is saying much of anything. Yes, we all know these models are trained on existing footage and that limits spontaneous results, but I don't think that's the threat.

First, the quality of temporal stability here is the "oh poo poo" moment for me. That's huge. After that, I think what you're going to end up seeing, eventually, as the end use case is sort of guided content creation. You give it rough shapes, or a rough phone-shot sequence, and use that to guide your AI output. Then that *will* allow you to start getting exactly what you want. Sort of a Wonder Studio-style approach: "Shoot your sequence on a phone and then we can replace the elements with the click of a button." Like this C4D renderer, Airen4D.

https://www.youtube.com/watch?v=q64ATr8mfzU

Text prompts + visually guided approach.

Jenny Agutter
Mar 18, 2009

Ccs posted:

So this is making the rounds at work today

https://openai.com/sora#capabilities

The artists under 30 I work with are saying this means we all won't have jobs in 5 years. Maybe I'm delusional, or maybe I'm just jaded by so many cycles of tech hype (I worked with a lot of Bitcoin bros at my first job, and the bosses at my third studio were NFT hypers), but this reminds me of the amazing papers at SIGGRAPH every year. Looks super cool! When can I expect it to become part of any workflow? ... *crickets*

Anyyyway, I'm sure it will further degrade the stock video market. And also make the general public assume that VFX work is all done by computers.

lol watch the legs and hands of the walking woman in the first sample

Ccs
Feb 25, 2011


BonoMan posted:

I'm not sure that quote is saying much of anything. Yes, we all know these models are trained on existing footage and that limits spontaneous results, but I don't think that's the threat.

First, the quality of temporal stability here is the "oh poo poo" moment for me. That's huge. After that, I think what you're going to end up seeing, eventually, as the end use case is sort of guided content creation. You give it rough shapes, or a rough phone-shot sequence, and use that to guide your AI output. Then that *will* allow you to start getting exactly what you want. Sort of a Wonder Studio-style approach: "Shoot your sequence on a phone and then we can replace the elements with the click of a button." Like this C4D renderer, Airen4D.

https://www.youtube.com/watch?v=q64ATr8mfzU

Text prompts + visually guided approach.

Could be, could be. I'll probably be a skeptic until studios suddenly downsize a whole bunch due to AI. But honestly, if AI is able to accomplish what people fear it could, I expect things to go this way: my job gets outsourced to India in a few years, then AI gets good enough a few years later, and the Indian guys lose their jobs too. Outsourcing is a much bigger near-term threat to my role, and labor arbitrage isn't a magic technology with trillions of new datacenter investments behind it.

Koramei
Nov 11, 2011

I have three regrets
The first is to be born in Joseon.
I think there are still going to be plenty of creative roles, but the artists sticking their heads in the sand thinking this is just going to pass by are in for a rude awakening. IMO it's a very adapt-or-die moment right now, and probably will be for a good few years, aside from a lucky minority of artists who can cruise on star power.
3D generation still isn't there yet, but it's getting notably better, and who the hell knows what's going to happen once you get AI to generate a video, do 3D capture from it with Gaussian splatting, and generate based on that, or whatever.

Jenny Agutter posted:

lol watch the legs and hands of the walking woman in the first sample

On a 5-year timeline that's inconsequential. Look where DALL-E was just a year and a half ago.

cubicle gangster
Jun 26, 2005

magda, make the tea
I think there are largely two different reasons why someone wants to hire/work with a 3D artist - they're either buying the end result, or the process.

With a movie, the director is buying the process - the ability to say, I don't like that bit, can you do this or that to it. There is no possible result you could spit out of AI and have them say 'that's it!', because the entire purpose of the process is to be able to control it down to the finest level of nuance, and they will always be exercising that right to control.
It's the same with real estate marketing - the client is a developer getting to do a dry run of something that's about to cost them $250m+; it's the first time they get to see how the stone looks up against that much wood, how light bounces around, or how the $35k sofa really feels in the space. The process is what they're buying; the final images are almost a byproduct.

Architectural competition work, however - the details are unimportant, not figured out yet, and the purpose is to take these volumes and forms and make them look nice, oh and you've got about 12 hours to do it.
There are absolutely some areas of being an artist that will get swallowed up - and the best thing I think anyone can do to protect themselves is remind themselves they're in a service industry, not a product-based one, and ensure every move they make and how they present themselves is focused on the service aspect.

Related to this, something that happened this week: we have a client on a project who wanted a full winter-snow version of a dusk image. I had someone on my team who is in pretty deep with Stable Diffusion try to get a mockup from AI that we could use to get the client's approval; he spent two full days on it and the results were poo poo. I had a crack with SD and Midjourney and got absolute garbage. Cool-looking images, but they simply didn't work - the details were too loose and gestural, not something we could point to and commit to rolling into the 3D model, and not something we could even use elements of in a matte painting, because it all went quite vague once you zoomed in.
I ended up pulling some elements from Google image search and knocking up a matte painting in 90 minutes that was exactly what we needed - really razor-focused on what each element was, so we could talk about what we'd do to the image. While I was working on it I thought about how fast a good matte painter can work; there's no loving way anyone banging out endless variants of prompts can dial into a mood someone else is attempting to describe faster than a good concept artist.
Right now, AI inserted into production can at best hope to replace the 'rough mood paintover' stage, if that, and it's slower and worse at it than any human who's done it before. However advanced it seems, it's really very surface-level when you try to work with it in a meaningful way.

AI is a very useful tool for a lot of things (turning poo poo 3D-rendered people into ones that look sort of human), but for someone even vaguely visual, it just can't and will never get what's in your head out better than rolling up your sleeves. It's a bunch of people who aren't creative or visual who think it's going to truly take over and decimate creative fields.

EoinCannon
Aug 29, 2008

Grimey Drawer
Yeah, pixel-loving directors will protect us to some extent in the commercials business.

Alterian
Jan 28, 2003

Even before AI, I've been telling my students for years that the days of getting hired as an entry-level 3D artist on a game to model chairs and simple props are over. That sort of work has been outsourced for a long time. They need to think about what they can actually bring to the table. They need to up their art theory and design skills, as well as their understanding of how to make something audiences will like. You also need a deeper understanding of what's going on in the guts of what you're doing, so when the AI breaks, you can go in and fix it. Yeah, sure, you can ask ChatGPT to write you a C# script to make your lights flicker in Unity. Do you know how to actually put it in the engine? What if it needs to be triggered at a certain time? Can you figure that out?

mutata
Mar 1, 2003

Edit: nevermind, misunderstood. In my defense, it's 6am and I'm deep inside LAX at the moment.

mutata fucked around with this message at 15:28 on Feb 16, 2024

BonoMan
Feb 20, 2002

Jade Ear Joe

cubicle gangster posted:

I think there are largely two different reasons why someone wants to hire/work with a 3D artist - they're either buying the end result, or the process.

With a movie, the director is buying the process - the ability to say, I don't like that bit, can you do this or that to it. There is no possible result you could spit out of AI and have them say 'that's it!', because the entire purpose of the process is to be able to control it down to the finest level of nuance, and they will always be exercising that right to control.
It's the same with real estate marketing - the client is a developer getting to do a dry run of something that's about to cost them $250m+; it's the first time they get to see how the stone looks up against that much wood, how light bounces around, or how the $35k sofa really feels in the space. The process is what they're buying; the final images are almost a byproduct.

Architectural competition work, however - the details are unimportant, not figured out yet, and the purpose is to take these volumes and forms and make them look nice, oh and you've got about 12 hours to do it.
There are absolutely some areas of being an artist that will get swallowed up - and the best thing I think anyone can do to protect themselves is remind themselves they're in a service industry, not a product-based one, and ensure every move they make and how they present themselves is focused on the service aspect.

Related to this, something that happened this week: we have a client on a project who wanted a full winter-snow version of a dusk image. I had someone on my team who is in pretty deep with Stable Diffusion try to get a mockup from AI that we could use to get the client's approval; he spent two full days on it and the results were poo poo. I had a crack with SD and Midjourney and got absolute garbage. Cool-looking images, but they simply didn't work - the details were too loose and gestural, not something we could point to and commit to rolling into the 3D model, and not something we could even use elements of in a matte painting, because it all went quite vague once you zoomed in.
I ended up pulling some elements from Google image search and knocking up a matte painting in 90 minutes that was exactly what we needed - really razor-focused on what each element was, so we could talk about what we'd do to the image. While I was working on it I thought about how fast a good matte painter can work; there's no loving way anyone banging out endless variants of prompts can dial into a mood someone else is attempting to describe faster than a good concept artist.
Right now, AI inserted into production can at best hope to replace the 'rough mood paintover' stage, if that, and it's slower and worse at it than any human who's done it before. However advanced it seems, it's really very surface-level when you try to work with it in a meaningful way.

AI is a very useful tool for a lot of things (turning poo poo 3D-rendered people into ones that look sort of human), but for someone even vaguely visual, it just can't and will never get what's in your head out better than rolling up your sleeves. It's a bunch of people who aren't creative or visual who think it's going to truly take over and decimate creative fields.

Yeah, this is why I specifically mentioned mid- to lower-level videos. I'm in a spot where I can see everything from high-end projects to low-end ones (and deal with all the respective client levels), and the mid-to-low level will absolutely be game for just having one or two internal people churning out "close enough" AI stuff. But the higher-end folks will probably always want too much control to rely on it.

That said, with how fast things move, I have learned never to say "never." It's getting crazy. And it's going to be user-driven AI (meaning someone giving it a bit of a visual push in the right direction with composition/camera/etc.) that is going to be the real threat later on.

With respect to your specific problem, this seems like exactly what Photoshop's generative AI is for, vs. SD/DALL-E/MJ/etc. Did y'all try that?

edit: oh, totally forgot to mention that a former employee who now works at a VERY LARGE computer company just said they have been given the directive to reduce costs as much as possible through the use of generative AI, and that vendors who tout those capabilities will be bumped up the priority list. So it's 100% having an actual, tangible effect. (That's not in response to anything you said, just a general note that happened to come up two days ago.)

edit 2: oh, and this is another interesting thing (that falls firmly under "using AI as a cool tool for human artists"): this guy created a splat from just 9 seconds of one of those clips. OpenAI says they can create 60-second clips. This falls into the rougher "SIGGRAPH whitepaper" category, but it's still very cool.

https://radiancefields.com/openai-launches-sora-and-the-world/

BonoMan fucked around with this message at 16:07 on Feb 16, 2024

cubicle gangster
Jun 26, 2005

magda, make the tea

BonoMan posted:

With respect to your specific problem, this seems like exactly what Photoshop's generative AI is for, vs. SD/DALL-E/MJ/etc. Did y'all try that?


Yeah, that was also a bit poo poo vs simply perspective-cropping a photo, warping, and using some blending modes - the thing I experienced with it was that I'd already got a pretty solid idea in my head of what the vibe should be like, even if the details weren't fixed, and no amount of automatic generation would hit it. It felt like it would, and seemed like it should, but then I was endlessly generating and going 'no, not right... not right...'
Which led to the realization that a director sitting with a matte painter can knock out an outrageous vibe sketch that gets the point across in 30 minutes to an hour, before that painter spends another 4-5 hours making it high-res and logically consistent. For an example of a space within digital art that AI might take over, it made me realize it's got a long way to go. You can waste hours generating images from prompts, but unless you had little to no consistent vision to begin with and never cared that much about the details, it won't cut it.

And yeah, there's absolutely a ton of people who just need assets and don't care about this sort of thing. Now is a very good time for people to reevaluate how they position themselves as an artist, the kind of clients they work for, and where they fit into the process.


And some tangential musings: we find it hard to hire anyway; so many CG artists in our industry these days have worse portfolios than we were seeing a decade ago. The images present very well, but I mean worse in that you can take a closer look and see much lower attention to detail, less genuine craft used in making them. It's drag-and-drop assets with a LUT and a framebuffer preset - images knocked together in a day. Maybe this will force people to step up their game a bit and light a fire under them.

Ccs
Feb 25, 2011


I guess I won't need to wonder very long about how long it will take AI to make or break certain industries, given the amount of investment it's getting and how much CEOs are salivating over it. I'm used to annoying tech hype being nothing but bluster, but occasionally a piece of tech will come along that really amazes me. I was on one film where we had to match faces to actors exactly using Facial Action Coding System rigs, so for months animators were laboriously matching every motion on the actors' faces, and then CFX artists came in and did shot sculpts to match the bits the rig couldn't hit. Then at some point that was getting so costly that they introduced a tool that just automated it. Completely automated it. Everyone could go back to focusing on body mechanics and their other tasks instead of dealing with the faces anymore. It was incredible. It happened in like the last two months of the project, when things were super super crazy.

Then a couple years later I was on a project at the same company that needed that exact same thing. I asked if anyone knew about that tool, but apparently the developers had left, the tool was discontinued, and there wasn't enough documentation for anyone else to pick it up. So we went back to matching by hand...

BonoMan
Feb 20, 2002

Jade Ear Joe

cubicle gangster posted:




And some tangential musings: we find it hard to hire anyway; so many CG artists in our industry these days have worse portfolios than we were seeing a decade ago. The images present very well, but I mean worse in that you can take a closer look and see much lower attention to detail, less genuine craft used in making them. It's drag-and-drop assets with a LUT and a framebuffer preset - images knocked together in a day. Maybe this will force people to step up their game a bit and light a fire under them.

And NOBODY KNOWS HOW TO UNWRAP UVs anymore. People just tri-planar map everything.

mutata
Mar 1, 2003

Yeah, I don't have time for perfectly clean UVs, and now Substance Painter makes being lazy with UVs more viable, I guess.

mutata fucked around with this message at 13:32 on Feb 17, 2024

Listerine
Jan 5, 2005

Exquisite Corpse
Ooooh, the Nomad Sculpt guy posted that he thinks the desktop version might come as early as March, but Q1 or Q2 is the target.

Alterian
Jan 28, 2003

I love unwrapping. It's very zen for me.

EoinCannon
Aug 29, 2008

Grimey Drawer

Alterian posted:

I love unwrapping. It's very zen for me.

If I'm not super pressed for time, it's a nice break where the brain can just do something procedural and non-creative.
