Gearman
Dec 6, 2011

Just filed my taxes this year and noticed that they actually have a section for "cryptocurrency" now. The IRS is well aware, and they'll get theirs one way or another.

Gearman
Dec 6, 2011

EoinCannon posted:

I have a client who wants to be able to view an .obj on his MacBook. Rotate, zoom, etc., so he can use it as reference for a sculpture.
It's going to be a decimated, detailed ZBrush model, so maybe around 1 million polys.

What would be a good app for this? He's probably not super 3D-literate, so he probably won't mess around with Blender.
I don't have a Mac, so I can't test anything out.

Autodesk has a free viewer app specifically for this kind of thing: https://viewer.autodesk.com/

Gearman
Dec 6, 2011

EoinCannon posted:

It will be worth it though because all the extra revenue will mean they can invest in improving the product so much more

Can they? Yes. Will they? Probably, for the next year or two, while all the original folks who care about the product are still there. Once those people have moved on to other things, I expect very little to happen with it.

Gearman
Dec 6, 2011

Yeah, there really isn't anything on the same level as ZBrush, unfortunately.

Gearman
Dec 6, 2011

Alterian posted:

Are there any photogrammetry threads floating around these boards?

Not that I'm aware of, but I have a ton of experience with it and I'm always happy to talk shop!

Gearman
Dec 6, 2011

Alterian posted:

Yay! What software do you use? I use Reality Capture. Here is some of the stuff I've poked at:
https://sketchfab.com/Lief3D

I have a Kress Foundation grant right now to try to come up with a practical workflow for capturing roughness/specular in a more automatic way.

Nice, some pretty good stuff there! Looks like you're also remeshing and rebuilding some parts that don't scan well (like wires). I mostly use Agisoft Metashape (formerly PhotoScan), but I've used Reality Capture a bunch too, and if I were starting from square one today I'd probably use Reality Capture instead. I have an extremely small sample of scanned stuff here if you'd like to check it out: https://clark.artstation.com/. I spent about five years across Rockstar and Wayfair doing photogrammetry R&D for games, realtime AR/VR, and offline rendering. If you have the Wayfair app and use the 'View in Room' option on a product that isn't something you sit on and isn't artwork, it was likely scanned or run through the photogrammetry pipeline.

Most of my Wayfair R&D around photogrammetry was highly specific to the needs of the company, but I did spend quite a bit of time trying to get automatic roughness/specular capture, or at least to acquire those maps faster. Unfortunately, it would have required significant changes to our scanning hardware, which proved to be more trouble than it was worth for our use cases. If you're scanning flat material samples, it's fairly straightforward; but when you're trying to scan an entire object, let alone objects of varying sizes, it becomes incredibly challenging. I came to the realization that I would need multiple images from very similar positions under different polarizations of light, and while I was able to solve that to some degree, it was still too difficult to get consistent results. That said, that was several years ago, and I haven't been looking closely at advances in that space since, so there may be more novel and feasible approaches by now.
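
For anyone curious what that polarization approach looks like in practice, here's a minimal sketch of the classic parallel/cross-polarized subtraction. It's a toy version under big assumptions -- the file names are hypothetical, and it presumes two perfectly aligned shots from the same position, which was exactly the hard part on the hardware side:

code:
import numpy as np
from PIL import Image

def load_linear(path):
    # Undo the ~2.2 display gamma so the subtraction happens in
    # (approximately) linear light.
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return img ** 2.2

# Two photos from the same position under polarized light:
# analyzer parallel to the source polarizer -> diffuse + specular,
# analyzer crossed with it -> mostly diffuse only.
parallel = load_linear("shot_parallel.png")
crossed = load_linear("shot_crossed.png")

# Whatever the crossed polarizer removed is (roughly) the specular
# reflection, so the difference is a crude specular-only estimate.
specular = np.clip(parallel - crossed, 0.0, 1.0)

out = (specular ** (1.0 / 2.2) * 255.0).astype(np.uint8)
Image.fromarray(out).save("specular_estimate.png")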

Gearman
Dec 6, 2011

Yep, Parsec is the answer.

It's designed to prioritize low latency over visual quality, so occasionally your screen will drop to a slightly lower resolution, but your mouse movement will stay lag-free. I use it for work now, and I'd recommend it above all the other remote desktop apps for creatives.

Gearman
Dec 6, 2011

sigma 6 posted:

Modifiers are very much how 3D Studio Max works as well. Point is, no modifiers are needed. It just works that way by default. Also renders that way.

I always found it interesting how Blender seemed to take a cue from Max with the modifier stack, like 3D artists need to see how meshes are modified by a linear layering stack a la Photoshop. Node-based software like Houdini and TouchDesigner doesn't have such a linear/vertical workflow. Nor do most shading networks these days in Maya, Max, or Blender, or compositing software like Nuke or Blackmagic Fusion, or, uh... Shake... which was the first software I can remember that used a node-based workflow. Although I know Houdini was doing it even before Shake was.

The stack is really only used for modifier modeling. Blender uses node-based networks for a ton of other things, including shading networks, compositing, geometry nodes, etc.

That said, the stack is amazing and putting it in Blender made it very easy for me to move over from Max.

Gearman fucked around with this message at 12:05 on Jun 22, 2022

Gearman
Dec 6, 2011

bop bop perano posted:

Someone posted a video of some professional CG dude who's worked on tons of films and music videos etc., and it showed his crazy studio that looked sort of like a small home theater, and he was sitting on the couch with a tablet in his lap, and I was like damn, why am I not doing that? So I bought a cheap little 8-key USB keyboard thing that you can program however you want,* and then I just plugged my laptop into my TV and sat on the couch with my drawing tablet in my lap, which has been really comfortable, and I'm glad I saw it because I wouldn't have thought of doing it that way. I've also just been trying to avoid using the computer at my desk, which is whatever, but it's not exactly comfortable, and this also helps me feel like I'm not working at an office job when I'm working on music or Blender stuff.



*(I need stickers to mark the keys because I've just had to memorize the layout. I thought about literally the 8 most-used keyboard shortcuts for Blender, while also trying to factor in which shortcuts I can easily work around by just clicking on the interface instead... but yeah, idk, I've been unsure about what the best layout is, so if someone uses something similar, do you have any suggestions?)

Damn, that sounds cool as hell. Do you have a link to the video for reference?

Gearman
Dec 6, 2011

For UV packing in Blender, I've been really impressed with UVPackmaster: https://glukoz.gumroad.com/l/uvpackmaster3
It's $45, but the number of features and the quality of its packing are incredible. It's also extremely fast and efficient.

For UV unwrapping in Blender, the best free package I can recommend is probably TexTools: https://blenderartists.org/t/textools-for-blender/700811.
For older Max users, the name might sound familiar: it's the same TexTools from Max, ported over to Blender. Lots of great features and tools at a great price (which, again, is $0).
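
For reference, here's roughly the stock pipeline those add-ons are competing with, driven from Blender's Python API. This is a sketch, not either add-on's method: built-in operators only, defaults vary by Blender version, and it assumes a mesh object is active.

code:
import bpy

# Assumes a mesh object is active in the scene.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Stock angle-based unwrap, then Blender's built-in island packer.
# UVPackmaster and TexTools replace/extend these two steps.
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.001)
bpy.ops.uv.pack_islands(margin=0.001)

bpy.ops.object.mode_set(mode='OBJECT')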

Gearman
Dec 6, 2011

BonoMan posted:

Right, no I'm familiar with all of those services. I was just working on a project where I brought some Wonder Studio stuff into Blender. It was super cool.

I was just wondering what real-world usage of Blender is like in game studios, and whether Maya being the gold standard for animation has kept studios in that pipeline.

(for me personally, I'm not in games, but am in the 3D/mograph world. I started on Maya v1 back in "the day." Then on to C4D/Octane. Now I'm finishing up my remaining C4D jobs and moving on to Blender as I'm the last person here at our place NOT on Blender.)

Maya and Max are still the gold standard in the industry, but at this point absolutely no one really cares which software package you're used to. The portfolio is king, and there's always an expectation of tools training for every industry job.

As for Blender, I know it's in use at a bunch of bigger studios these days, like Bungie (as seen in the Marathon ViDoc) and Ubisoft (a member of the Blender Development Fund: https://www.blender.org/press/ubisoft-joins-blender-development-fund/), and it's everywhere in the indie scene. As a lifelong Max user, I generally advocate for Blender these days, as it has a much larger plugin ecosystem and many QOL features that Max and Maya don't. It's also great to be able to use the same software for professional and personal work.

Gearman
Dec 6, 2011

Same. I'm really curious whether they'll come anywhere close to the incredibly good boolean plugins available for Blender.

Gearman
Dec 6, 2011

Listerine posted:

Someone on Twitter is building a Blender plugin that does VDB booleans; I'll post it if I can find it again. Although I'm not sure if he's building it for distribution or just for himself...

Which ones do you like?

Boxcutter and Hard Ops are pretty much the gold standard, but a few others, like Speedflow and MeshMachine, are really good too.

I don't think Blender has a plug-in that's as solid as Max's Smart Extrude, though. That does look really impressive.

Gearman
Dec 6, 2011

sigma 6 posted:

Photobashing an image or using a bot to do the compositing for you? Did you think this wasn't inevitable? (re: Wonder Studio)

Are you going to ban using Photoshop because it uses AI, or just the AI features of Photoshop? (...which Adobe says only draw from its Adobe Stock library.)

As for legality: yes, ChatGPT has been made illegal in a few countries. As far as I know, no image generators have been made illegal, however, and what's more, "AI Images Aren't Protected By Copyright Law According To U.S. Copyright Office". But as the tech gets better, who's to tell what was generated by DALL·E, or some time in Photoshop and/or Firefly, or a combination of both?

Artists who photobash also have the choice to get the rights to the photos they're using, and they should. Legal battles over AI images are ongoing, with some preliminary judgments, but as with all legal cases in the US, the first ruling is rarely the final ruling.

It's fine to look at these kinds of tools with interest and intrigue, but you should also understand that many people are vehemently against them for entirely valid reasons.

Gearman
Dec 6, 2011

Definitely missed it the first time. Congrats! You should absolutely be proud, because that is an awesome reel. Top-notch lighting, animation, and rendering work all around.

Gearman
Dec 6, 2011

Arrimus3D is also pretty good for learning hard-surface modeling. He definitely falls into the "if it looks good, it's good" camp for fundamental subd modeling, which is fine in most cases. He also covers hard-surface modeling in a bunch of different software packages.

Edit: The biggest crime of Andrew, AKA Blender Guru, IMO, is sounding so happy in all of his tutorials. Everyone knows that the best and most useful tutorials are made by people at 2 AM, when they're burned out, exhausted, half asleep, and sound as if they're begrudgingly recording their walkthrough for three other people on the Internet.

Gearman fucked around with this message at 13:55 on Mar 6, 2024

Gearman
Dec 6, 2011

Ccs posted:

What, like full VFX for TV and movie-type shows? I wasn't aware AI was at a place where it could conceivably meet client demands for that.

Do you mind me asking which studio it was?

It absolutely cannot. Anyone laying off experienced creative staff in the belief that AI is good enough to replace them is going to find themselves underwater incredibly quickly.

Gearman
Dec 6, 2011

As someone who has been assessing current and future-potential AI tools for 3D production as part of my day job... your jobs are safe (mostly). The current industry leader in the AI 3D space still has most of its content produced by real humans. I suspect they're using their position as a leader to raise enough capital (and hopefully generate enough ARR from their store) to keep them afloat long enough for 3D AI to mature to the point where they can reduce their opex by cutting back their significant human labor. Even then, the results are "OK" at best. I still have a junior doing a ton of cleanup -- including UVs by hand -- because AI has no idea how to do that in any sensible way. Just to lay my credentials on the table so that people don't think I'm some rando:

- 15+ years of experience in games as a 3D artist and tech artist, working on props, environments, tech art, etc. for games including Grand Theft Auto and Red Dead Redemption
- 5+ years in the tech industry leading R&D teams on AR, VR, MR, spatial computing, automated photogrammetry, and material scanning
- Several patents related to material scanning, photogrammetry, and model generation

A few off-the-cuff thoughts and comments about AI for 3D:

1. The more a company cares about quality and specific details, the less value it will get out of AI 3D modeling. I can't stress this enough. If your art director/client/stakeholder/whoever is really picky, 3D AI will deliver less and less value. Your hero assets will still need to be made entirely by hand.
2. I think pretty much everyone looking at 3D AI generation is vastly underestimating the gulf in complexity from 2D -> 3D. Most are thinking it's a 2x jump in complexity from 2D, when in reality it's closer to 10x or even 100x.
3. Your jobs are "mostly" safe because the only jobs put in more danger are the low-level box-and-barrel prop-making ones. Those jobs were already in danger from outsourcing decades ago, and AI will squeeze them even further. That said, companies will always need juniors in order to grow them into senior talent. Eliminating junior roles is incredibly short-sighted and will ultimately kill any business, or industry, with 3D needs -- and the number of those is growing by the day (for consumer goods businesses, the ROI on 3D imagery vs. 2D imagery is significantly higher than the cost of producing the content).
4. Smaller teams will see more value than the larger incumbents, but only in terms of quantity of output. Quality of output will still be significantly lower than with teams of experienced artists with an eye for quality.
5. Anything dealing with 2D is moderately at risk, but it still needs a lot of touch-up by an experienced artist. One note here: the fundamental thing to understand about these AI tools is that you rarely get exactly what you want; you can only increase your odds of getting close -- much like traditional outsourcing, where better, more experienced outsource artists are more valuable.
6. As always, tools are temporary, fundamentals are forever. Companies will still need artists with solid fundamentals to work with whatever the AI spits out. It's not much different than the artistic skill needs around photogrammetry models, and mocap data. That is to say, great artists will turn that input into great assets, and bad artists will turn that input into bad assets.
7. Anyone saying that AI will transform what they do is, frankly, ignorant of the realities of a production environment that has to use 2D or 3D assets.

Gearman fucked around with this message at 14:13 on Apr 10, 2024

Gearman
Dec 6, 2011

roomtone posted:

Thanks for writing that up - it's encouraging to hear from someone in your position. I was thinking of emailing places near me that I'd like to work at, to get a sense of how/if they are using AI, too.

I also read an article yesterday which calmed me down a bit about it: https://www.wheresyoured.at/bubble-trouble/

I've been back at work on things today; my mind feels a bit clearer, and hearing things from people who know a lot more than me helps.

That article aligns with my experience in machine learning. I worked on a project where we leveraged synthetic training data (2D image renders) to help identify objects in an image, because real images were 1) hard to acquire in the format we needed, and 2) still required lots of manual work for basic things like drawing bounding boxes around objects. The best results came from a dataset that mixed real and synthetic data; the worst performer was the dataset that used only 2D renders. On top of that, the available dataset of 3D assets is minuscule compared to the available datasets for text and 2D images. And beyond that, 3D assets have many more specific requirements than text or 2D images do. There is a massive gulf between the requirements of a dinosaur used for something like Jurassic Park and a small toy dinosaur that sits on a shelf in a video game. All of the current AI 3D tools can kind of make the latter, albeit poorly, but none are anywhere close to the former. The real value is in the former.
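
To make "a mix of real and synthetic data" concrete, the composition side of that kind of experiment can be as simple as the PyTorch sketch below. The stub dataset stands in for the real loaders, and the oversampling ratio is an invented knob for illustration, not the mix we actually used:

code:
import torch
from torch.utils.data import (ConcatDataset, DataLoader, Dataset,
                              WeightedRandomSampler)

class StubImages(Dataset):
    # Stand-in for a real dataset of (image, label) pairs.
    def __init__(self, count, source_id):
        self.count, self.source_id = count, source_id

    def __len__(self):
        return self.count

    def __getitem__(self, idx):
        return torch.rand(3, 64, 64), self.source_id

real = StubImages(200, source_id=0)    # scarce, hand-labeled photos
synth = StubImages(2000, source_id=1)  # cheap renders, labels for free

mixed = ConcatDataset([real, synth])

# Oversample the scarce real images so each batch isn't dominated by
# renders; in practice this ratio is something you tune per task.
weights = [10.0] * len(real) + [1.0] * len(synth)
sampler = WeightedRandomSampler(weights, num_samples=len(mixed))

loader = DataLoader(mixed, batch_size=16, sampler=sampler)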


Koramei posted:

I'm really interested in your perspective, given your background in photogrammetry. With how advanced that's been getting, particularly with 3D Gaussian splatting and such in just the past 6 months or so, I've been feeling like it's basically game over once the output is actually editable and usable in a regular workflow, which can't be that far off. Use AI video gen to create a perfectly lit turnaround of any asset, then use the already-tested workflow to turn that video into 3D. Boom, you're basically done.

In my head this sounds watertight, and like the doom of all of us, but then I've mostly only approached scanning and the like in my spare time as a hobbyist, rather than for actual production.

Gaussian splatting, and AI video gen in general, I think have the same fundamental problem as technologies like photogrammetry and VR: they all demo extremely well, but in practical use they have a lot of shortcomings. It's easy to get excited about them because they look so cool -- until you start having to use them or develop with them outside of a demo environment.

On Gaussian splatting: it definitely looks very cool, but it has the very same fundamental issues as photogrammetry:
1. It can only recreate something that already exists.
2. The quality of the output is highly dependent on the quality of the input.
3. There is no easy way to get proper material definitions.

As a technology to quickly create an immersive version of a real thing, I think it's 100% cool and useful as a visualization tool, but it has very limited use for asset creation. Companies like Zillow or Redfin will probably eat it up for walkthroughs of homes (they already do something similar with Matterport scanners, and Gaussian splatting can use that same input data to make the experience more immersive at relatively little additional cost). Museums, or places looking to preserve things for historical purposes, will have a use for it as well. You'll likely see some small, limited experiences built from splats too, but ultimately it will probably see even less use than photogrammetry, which is itself still relatively niche compared to the vast quantity of 3D assets created by hand today.

AI video currently slams head-first into problem #2 above, especially in terms of consistency from frame to frame, because one of the core requirements of photogrammetry, and of data for Gaussian splats, is imagery that is highly consistent from image to image. This is why anything with a high level of reflection (mirrors, shiny car paint, chrome, etc.), transparency (glass of any kind), small thin pieces (plants with small needles, hairs, fur, etc.), anything that moves (humans, animals, things moving in the wind, etc. -- the caveat here is that you can overcome motion with a large number of cameras, which is why all head- and body-scanning rigs have dozens of them), and anything that can't be easily seen by the camera or has small occluded areas (a barely open pinecone, or folds in a shirt) are all very poor candidates for photogrammetry.

On top of that, you need really consistent lighting, which is why overcast days for outdoor scanning, or highly controlled indoor lighting environments, are needed for even good results. Even the best AI really struggles to make a single 2D image without glaring errors; most good 2D AI images still require multiple passes, with selected areas regenerated to fix issues, for any kind of art-directed requirement. The consistency between AI-generated frames is significantly worse. In motion it looks kind of OK, but even half-decent artists can quickly spot all the glaring errors, and if a human can see those errors, the results of feeding those frames into Gaussian splatting or photogrammetry will be even worse. There are some very limited scenarios where I'd say the results are kind-of OK, but that's when the subject is pretty far from the camera, the camera is moving pretty quickly, and the animation is very short (a few seconds at most). After all of this, there's still the fundamental problem of material definitions, which are extremely difficult to get right and are a fundamental component of making something look even somewhat correct.

All of the current solutions for generating material definitions, outside of the technically correct ones (using a system of dozens of lights and taking dozens of photos), are largely just making wild guesses based on the values in the image rather than on what the materials in the image actually are. These are the image-to-material software packages you see either standalone or inside software like Substance Designer, Substance Sampler, etc. They have no way of knowing what the material properties of the surfaces in an image should actually be. Unfortunately, the training data necessary for this also exists only in very small quantities, almost entirely within the walls of academic institutions. I haven't seen anything make significant headway here in a while, and with the big money and attention (which drives grants, and thus academic research) going to AI 2D, 3D, animation, and simulation, there's not much attention on this space, so I don't expect much headway any time soon, either.
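
To illustrate what "wild guesses based on values in the image" means: stripped of the ML wrapping, the core of a single-image material estimator amounts to heuristics along these lines. The constants are invented for illustration, and no real package works exactly this way, but the fundamental blindness is the same:

code:
import numpy as np

def naive_material_guess(image):
    # image: HxWx3 float RGB in [0, 1], assumed to be albedo-like.
    luminance = image @ np.array([0.2126, 0.7152, 0.0722])
    saturation = image.max(axis=-1) - image.min(axis=-1)

    # "Bright, gray-ish pixels are probably metal" -- a pure
    # pixel-value guess with no knowledge of the actual surface.
    metallic = np.clip((luminance - 0.6) * 5.0, 0.0, 1.0) * (saturation < 0.1)

    # "Darker pixels are rougher" -- equally unfounded in general:
    # dark glossy plastic and dark rough fabric look identical here.
    roughness = np.clip(1.0 - luminance, 0.05, 0.95)
    return metallic.astype(np.float32), roughness.astype(np.float32)

metallic, roughness = naive_material_guess(np.random.rand(64, 64, 3))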

Gaussian splatting + AI video gen will only be "game over" once they significantly improve and solve the consistency and material problems. People making quick and cheap content with largely off-the-shelf assets will see some value here, for sure. Anyone with high technical and artistic needs will not see much value, with early, fast concepting being the only real advantage they could conceivably leverage.

Gearman fucked around with this message at 16:29 on Apr 10, 2024

Gearman
Dec 6, 2011

Apologies for the double post here, but with all the discussion around AI, I think a separate post is warranted for my call to action for anyone working in the arts or thinking about art as a career:

1. If you like art, if you enjoy making art, DON'T STOP MAKING ART. We all have something to say, and in my case, I'm more of a show-don't-tell kind of person. My writing sucks, and I communicate better with pictures -- he says as he communicates entirely in text on some old forums. Without the ability to make art, I don't have much of a voice. Art, made by humans, is important.

2. If you want to have a career making art, FOCUS ON THE FUNDAMENTALS. Tools are temporary, fundamentals are forever. Focus on the basics: light, form, color theory, contrast, repetition, etc. New software comes and goes every week, but sound fundamentals always transfer over and make a difference. Craig Mullins pastes photos onto a canvas in Photoshop and is one of the most highly sought-after artists in the world, because he is also an incredibly good artist. I've probably spent, collectively, a year of my life looking at portfolios, just begging for a morsel of evidence that the artist has sound fundamentals. It's easy, and getting easier, to make art now, but it's still hard to make good art.

3. Develop an EYE FOR QUALITY. Being able to tell when something is bad, and knowing how to fix it, is extremely valuable. This is something many juniors lack, and it really only comes through experience. People who have good taste are immediately recognizable next to those who don't. This will always have value.

4. If you're afraid of something, LEARN MORE ABOUT IT. AI was really scary to me a few years ago. Now that I've really dug into it, it's not so scary, and I realize just how far it falls short. Similarly, I love photogrammetry, and that had a lot of people worried about losing their jobs, too. Today, most people understand the positives and negatives of photogrammetry, and no one has lost their job over it. OK, maybe that one guy who only knows how to do photogrammetry and refuses to learn anything else. Don't be that guy.

5. Accept that some jobs WILL BE DIFFERENT. There are now animation jobs that are largely mocap cleanup, and jobs that are just taking assets from Quixel and shoving them into Unreal. Soon there will be jobs that are just touching up AI images or AI models. I don't believe this will lead to the downfall of the production artist. There will still be jobs, but some will be different. I firmly believe this to be true.


There are many other important points I'd like to make, which at this point I should probably just package up into a GDC talk, but I think these are the most important for anyone grappling with the world as it is today and the evolving nature of AI. It's new, and somewhat scary, but it's going to be OK. Good artists are still going to be very employable.

Gearman fucked around with this message at 16:02 on Apr 10, 2024

Gearman
Dec 6, 2011

EoinCannon posted:

I can't read the article; who's going to buy them?
Maxon has Redshift, Autodesk has Arnold.

It's pretty much either Adobe, or one of Meta/Microsoft/Google, who could integrate it into their cloud rendering services.

Gearman
Dec 6, 2011

Yeah, archviz uses it a ton. Lots of e-commerce companies doing 3D imagery do as well, and collectively they're probably a bigger market than the entirety of VFX.
