Blooshoo
May 15, 2004
I'm a newbie
I just read a cool article about Magic Leap and I'm thinking of getting one of these VR things now, even though I know it'll be cheaper and better in a few years. I ordered some Google Cardboards because I'm cheap and it might be fun to poke around without a huge monetary commitment. Anyone try those out and recommend any apps or what have you?


somethingawful bf
Jun 17, 2005
Posted this before, but it seems to come up often. This is the largest space I've seen done with Touch so far
https://www.youtube.com/watch?v=BEhOivWqGmA

It's ~18 feet diagonal between Rift sensors.
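For scale, a quick back-of-envelope check (assuming the two sensors sit at opposite corners of a square space; the ~5 m figure is the commonly quoted Lighthouse base-station recommendation, used only for comparison):

```python
import math

def max_square_side(diagonal_ft):
    """Side of the largest square play space whose opposite
    corners hold the two sensors, given the diagonal between them."""
    return diagonal_ft / math.sqrt(2)

side = max_square_side(18)          # the ~18 ft setup from the video
print(round(side, 1))               # ~12.7 ft per side

# Commonly quoted Lighthouse recommendation: ~5 m between base stations
lighthouse_diag_ft = 5 / 0.3048
print(round(lighthouse_diag_ft, 1)) # ~16.4 ft
```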

Bremen
Jul 20, 2006

Our God..... is an awesome God

Blooshoo posted:

I just read a cool article about Magic Leap and I'm thinking of getting one of these VR things now, even though I know it'll be cheaper and better in a few years. I ordered some Google Cardboards because I'm cheap and it might be fun to poke around without a huge monetary commitment. Anyone try those out and recommend any apps or what have you?

I don't have Google Cardboard, but from everything I've heard, you get what you pay for. I've heard it's best for watching 3d movies.

TANSTAAFL
Feb 10, 2006
fubar

Poetic Justice posted:

Posted this before, but it seems to come up often. This is the largest space I've seen done with Touch so far
https://www.youtube.com/watch?v=BEhOivWqGmA

It's ~18 feet diagonal between Rift sensors.

I guess that might be true, but what I see in that video looks a bit spotty, and I've been reading a few impressions that seem pretty reasonable.
Recently the dev of Painty was talking about his "roomscale" Touch experience and pretty much summed it all up with "it requires a very specific setup to have it sorta but usually not completely work."

I actually would much rather it did work, then I'd end up with more 2nd hand Oculus ports that fully support roomscale :)

somethingawful bf
Jun 17, 2005

TANSTAAFL posted:

I guess that might be true, but what I see in that video looks a bit spotty, and I've been reading a few impressions that seem pretty reasonable.
Recently the dev of Painty was talking about his "roomscale" Touch experience and pretty much summed it all up with "it requires a very specific setup to have it sorta but usually not completely work."

I actually would much rather it did work, then I'd end up with more 2nd hand Oculus ports that fully support roomscale :)

Well, more than likely the "specific" setup is having the sensors catty-corner like you do with the Lighthouses. :) And I agree about the spottiness. I wish the dude would have just used normal USB3 cables like you are supposed to, instead of a USB1 cable and a USB2 extension.

TANSTAAFL
Feb 10, 2006
fubar

Poetic Justice posted:

Well, more than likely the "specific" setup is having the sensors catty-corner like you do with the Lighthouses. :)

The dev specifically mentions he's got them set up exactly like his Lighthouses and still has issues.

somethingawful bf
Jun 17, 2005

TANSTAAFL posted:

The dev specifically mentions he's got them set up exactly like his Lighthouses and still has issues.

Yah, see my edit though, he's using USB1 and 2 cables when you need USB3 cables for proper tracking.

edit: Oh, wait, you are talking about the other guy. Well, I mean, like I said, 18 ft diagonal is larger than what Valve recommends, and there is a video showing mostly flawless tracking. Again, that is using USB1 and 2 cables. So I would rather go by video proof versus just what some dev says, but that's just me. If somebody is on the fence I would recommend they wait and see what Touch is capable of, if they have the patience. The games aren't going anywhere.

somethingawful bf fucked around with this message at 04:03 on Jun 17, 2016
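For anyone wondering why the cable class matters here: a constant video stream from a tracking camera chews through bandwidth fast. A rough sketch, where the resolution, framerate, and bit depth are illustrative assumptions, not actual Rift sensor specs:

```python
# Back-of-envelope: does an uncompressed sensor video stream fit in USB2?
# All stream parameters below are illustrative assumptions, not Rift specs.
width, height = 1280, 960      # assumed camera resolution
fps = 60                       # assumed frame rate
bits_per_pixel = 8             # assumed 8-bit monochrome

stream_mbps = width * height * fps * bits_per_pixel / 1e6
print(round(stream_mbps))      # ~590 Mbps for the raw stream

USB2_EFFECTIVE_MBPS = 280      # ~480 Mbps signaling minus protocol overhead
USB3_EFFECTIVE_MBPS = 3200     # rough usable throughput for USB 3.0

print(stream_mbps > USB2_EFFECTIVE_MBPS)   # True: USB2 would have to drop or compress frames
print(stream_mbps < USB3_EFFECTIVE_MBPS)   # True: comfortable on USB3
```

The exact numbers don't matter much; the point is that a raw camera feed sits well above what a USB2 link can actually deliver, which is consistent with tracking getting spotty on the wrong cables.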

TANSTAAFL
Feb 10, 2006
fubar

Poetic Justice posted:

Yah, see my edit though, he's using USB1 and 2 cables when you need USB3 cables for proper tracking.

edit: Oh, wait, you are talking about the other guy.

I might have missed it, but I don't think the Painty dev mentioned his USB setup, though I would like to know if it's optimal or, like you said, using something without enough bandwidth. The video you linked has the USB1 and 2 cables I think, so yeah, not the best test for sure. Like I said, I hope it does actually work perfectly with an ideal setup by the time it's final.

*haha, all these edits are making this confusing, I'm just going to leave things this way for posterity.

somethingawful bf
Jun 17, 2005

TANSTAAFL posted:

I might have missed it, but I don't think the Painty dev mentioned his USB setup, though I would like to know if it's optimal or, like you said, using something without enough bandwidth. The video you linked has the USB1 and 2 cables I think, so yeah, not the best test for sure. Like I said, I hope it does actually work perfectly with an ideal setup by the time it's final.

*haha, all these edits are making this confusing, I'm just going to leave things this way for posterity.

Haha yeah, I get what you are saying. I do think the Vive will be better for larger spaces, no doubt, but I think generally, with 2 sensors, the Rift will be fine for a majority of the people out there. Again, I would recommend most people just wait and see what the full Touch+headset combo is capable of when Touch comes out. Like I said, the games aren't going anywhere, they're likely to be cheaper, and when Touch is out I think people can make a more informed decision with facts from multiple people using it, instead of random devs. Of course there are some people who might want to make a statement with their money and who hate how Oculus has a lot of exclusives, and that's fine, I just wanna play games ;)

Kazy
Oct 23, 2006

0x38: FLOPPY_INTERNAL_ERROR

Good thing someone here has a Touch and is going to do some testing once they get their USB3 extender in :v:

Truga
May 4, 2014
Lipstick Apathy

Kazy posted:

Good thing someone here has a Touch and is going to do some testing once they get their USB3 extender in :v:

Thank you!

Pi Mu Rho
Apr 25, 2007

College Slice

Tom Guycot posted:

You made this? Is photogrammetry easy enough for a novice to do? I thought it was something you needed like, 12 cameras and a ton of equipment to even do. Any chance of knowing how you go about doing something like this, or a guide online somewhere? I'm going back to visit my parents in a few weeks and I love the idea of being able to do a photogrammetry capture of their house, cabin, yard and other stuff.

As homeless snail said, Agisoft Photoscan and pretty much any old camera will do. The two things you have to learn are:
1) The best way to photograph a scene for the best coverage and detail and
2) The best way to use the software to get the results you want.

The video I showed was pretty much a first pass at a very complex scene, with lots of fiddly geometry like the benches, and lots of flat planes, which generally need touching up (or replacing entirely) in a 3D package. It's much better suited to organic scenes.

I found this to be a good source of information:
http://steamcommunity.com/games/250820/announcements/detail/117448248511524033

I've just bought a pair of decent DSLRs and I'm going to try out a variety of different scenes to see what works best next.

Tom Guycot
Oct 15, 2008

Chief of Governors


Pi Mu Rho posted:

As homeless snail said, Agisoft Photoscan and pretty much any old camera will do. The two things you have to learn are:
1) The best way to photograph a scene for the best coverage and detail and
2) The best way to use the software to get the results you want.

The video I showed was pretty much a first pass at a very complex scene, with lots of fiddly geometry like the benches, and lots of flat planes, which generally need touching up (or replacing entirely) in a 3D package. It's much better suited to organic scenes.

I found this to be a good source of information:
http://steamcommunity.com/games/250820/announcements/detail/117448248511524033

I've just bought a pair of decent DSLRs and I'm going to try out a variety of different scenes to see what works best next.

Thanks, I actually found that link when I started researching this morning after the posts about it. There was also a great guide from the Destinations folks that was linked on their workshop page. I'm actually trying a first test that's compiling or whatever in the trial of that Photoscan program as I type this.

How long does it usually take to compile these scenes? I have a hunch I took way too many pictures from all the wrong positions. I saw the devs of that Destinations program saying in their guide they had about 450 photos to capture that English church, while I ended up with 800-some from just my living room as a test...

Edit: also, I'm curious what quality settings you use in Photoscan, say for that example you posted?

Tom Guycot fucked around with this message at 10:17 on Jun 17, 2016

Fredrik1
Jan 22, 2005

Gopherslayer
:rock:
Fallen Rib

Tom Guycot posted:

In VR you basically have to play in supersperg sim mode, the casual modes will just get you destroyed by everyone flying with kb+m on a monitor. I quite enjoy it though, playing with a hotas setup at least, and I think the slower pace of sim mode where people aren't pulling off insane mouse aiming shots sucks you into the immersion more too.

It's good because supersperg sim mode is the only fun part of War Thunder. :colbert:

Pi Mu Rho
Apr 25, 2007

College Slice

Tom Guycot posted:

Thanks, I actually found that link when I started researching this morning after the posts about it. There was also a great guide from the Destinations folks that was linked on their workshop page. I'm actually trying a first test that's compiling or whatever in the trial of that Photoscan program as I type this.

How long does it usually take to compile these scenes? I have a hunch I took way too many pictures from all the wrong positions. I saw the devs of that Destinations program saying in their guide they had about 450 photos to capture that English church, while I ended up with 800-some from just my living room as a test...

Edit: also, I'm curious what quality settings you use in Photoscan, say for that example you posted?

My original scan used about 150 photos, and that wasn't enough (or I didn't position them well enough.) The video I posted used 259 photos, but would definitely have benefitted from more, both in terms of the local detail and capturing more of the background.
Aligning the cameras in Photoscan and then generating the point cloud took, overall, about 6 hours. If you've got a decent GPU, link that in with OpenCL in the preferences (DO NOT link any crappy onboard Intel GPU you may have. Doing so virtually killed my PC).
Generating the mesh doesn't take too long, and the texture takes a few minutes.

For settings, I tend to err on the low side. It's amazing how the human eye can overlook rough geometry if it has a photo texture on it. So: low-density point cloud, low-ish geometry, and I decimated the mesh down to 100k polys for that scene. Once I've finished cleaning it up in 3D Studio, the whole thing should only be 5,000-6,000 polys.

I'd be intrigued to know how you get on with doing a room interior, that's on my list of things to try.
800 photos does seem like a hell of a lot, though. For an interior, you should stand with your back more or less against the wall, shooting the wall opposite. Sidestep a foot or two, take another shot, repeat until you hit a corner and take 3 or 4 shots while doing a quarter-circle so your back is to the adjacent wall. Do this until you're all the way around the room, then go and pick out features in more detail. I wouldn't expect more than about 200 photos, really.
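The capture pattern above turns into a quick shot-count estimate. The room size, sidestep distance, and shots-per-corner below are made-up example numbers, not figures from the post:

```python
# Rough shot count for the wall-by-wall interior capture pattern described
# above: one shot per sidestep along each wall, plus a small arc of shots
# at each corner. All inputs are example values.
def first_pass_shots(room_w_ft, room_l_ft, sidestep_ft=2, corner_shots=4):
    perimeter = 2 * (room_w_ft + room_l_ft)
    wall_shots = perimeter // sidestep_ft          # one shot every sidestep
    return int(wall_shots + 4 * corner_shots)      # plus the four corner arcs

base = first_pass_shots(12, 15)    # a typical living room
print(base)                        # 43 shots for the first full loop
# detail passes on individual features then bring the total toward ~200
```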

Tom Guycot
Oct 15, 2008

Chief of Governors


Pi Mu Rho posted:

My original scan used about 150 photos, and that wasn't enough (or I didn't position them well enough.) The video I posted used 259 photos, but would definitely have benefitted from more, both in terms of the local detail and capturing more of the background.
Aligning the cameras in Photoscan and then generating the point cloud took, overall, about 6 hours. If you've got a decent GPU, link that in with OpenCL in the preferences (DO NOT link any crappy onboard Intel GPU you may have. Doing so virtually killed my PC).
Generating the mesh doesn't take too long, and the texture takes a few minutes.

For settings, I tend to err on the low side. It's amazing how the human eye can overlook rough geometry if it has a photo texture on it. So: low-density point cloud, low-ish geometry, and I decimated the mesh down to 100k polys for that scene. Once I've finished cleaning it up in 3D Studio, the whole thing should only be 5,000-6,000 polys.

I'd be intrigued to know how you get on with doing a room interior, that's on my list of things to try.
800 photos does seem like a hell of a lot, though. For an interior, you should stand with your back more or less against the wall, shooting the wall opposite. Sidestep a foot or two, take another shot, repeat until you hit a corner and take 3 or 4 shots while doing a quarter-circle so your back is to the adjacent wall. Do this until you're all the way around the room, then go and pick out features in more detail. I wouldn't expect more than about 200 photos, really.

I appreciate the tips, I'll have to give it a go again tomorrow when the sun is back out, trying it that way. I'm curious if this is going to come out at all. What I ended up doing was reading in some postings about photogrammetry that you should have roughly 60% overlap between pictures. That blog, though, like most photogrammetry info I was able to find, was geared at scanning one single object, not a location. So I had my camera on a tripod and took, well, 800-some pictures, 360 degrees at every angle down to up, overlapping 60%, from like 5 different points in the room. I'm guessing it's not even going to produce a usable result at all after hours of churning away :v:.

It's honestly amazing that the scene you created was only 259 pictures and that many polys, for such a large area to have that detail. I guess our minds really do fill in a lot of the details. Can I ask what you used, or how it was exported to be viewed in VR?
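For what it's worth, the 60% overlap rule converts into a sidestep distance once you pick a lens and a shooting distance. A rough sketch, where the sensor width, focal length, and distance are assumed example values:

```python
import math

# How far you can move between shots while keeping ~60% frame overlap.
# Sensor width, focal length, and distance are example values, not from
# the posts above.
def sidestep_for_overlap(distance_ft, sensor_mm=23.6, focal_mm=35, overlap=0.6):
    fov = 2 * math.atan(sensor_mm / (2 * focal_mm))     # horizontal FOV, radians
    footprint = 2 * distance_ft * math.tan(fov / 2)     # width covered at that distance
    return footprint * (1 - overlap)                    # fresh ground per shot

step = sidestep_for_overlap(10)     # shooting a wall 10 ft away
print(round(step, 1))               # ~2.7 ft between shots
```

The upshot: at living-room distances the rule of thumb works out to a sidestep of a couple of feet, which matches the "sidestep a foot or two" advice earlier in the thread.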

Pi Mu Rho
Apr 25, 2007

College Slice

Tom Guycot posted:

I appreciate the tips, I'll have to give it a go again tomorrow when the sun is back out, trying it that way. I'm curious if this is going to come out at all. What I ended up doing was reading in some postings about photogrammetry that you should have roughly 60% overlap between pictures. That blog, though, like most photogrammetry info I was able to find, was geared at scanning one single object, not a location. So I had my camera on a tripod and took, well, 800-some pictures, 360 degrees at every angle down to up, overlapping 60%, from like 5 different points in the room. I'm guessing it's not even going to produce a usable result at all after hours of churning away :v:.

It's honestly amazing that the scene you created was only 259 pictures and that many polys, for such a large area to have that detail. I guess our minds really do fill in a lot of the details. Can I ask what you used, or how it was exported to be viewed in VR?

It does sound like you overdid it slightly with the photos, definitely. I'd recommend starting with a simple pass that covers everything roughly and see how it turns out. Then redo it with more if necessary. For scenes with walls/floors etc, it's best to keep your pictures as planar as you can to reduce skewed/angled texturing.

For VR, I exported the mesh as an FBX, did some cleanup in 3D Studio and then imported the whole thing into Unity. Dropped in a first-person controller, added some invisible meshes for collision and that was it.

Tom Guycot
Oct 15, 2008

Chief of Governors


Pi Mu Rho posted:

It does sound like you overdid it slightly with the photos, definitely. I'd recommend starting with a simple pass that covers everything roughly and see how it turns out. Then redo it with more if necessary. For scenes with walls/floors etc, it's best to keep your pictures as planar as you can to reduce skewed/angled texturing.

For VR, I exported the mesh as an FBX, did some cleanup in 3D Studio and then imported the whole thing into Unity. Dropped in a first-person controller, added some invisible meshes for collision and that was it.

Well, I'll be sure to post a video if I get any kind of usable result. How difficult is doing something with Unity for someone who knows literally nothing about programming, or engines, or anything? Messing around with that Destinations program on Steam seems fairly easy from their guide (https://developer.valvesoftware.com/wiki/Destinations/Creating_a_Destination#Creating_a_Destination_in_Destination_Workshop_Tools for anyone else curious), but it seems like Unity would be a lot more versatile in what could be done.

Pi Mu Rho
Apr 25, 2007

College Slice

Tom Guycot posted:

Well, I'll be sure to post a video if I get any kind of usable result. How difficult is doing something with Unity for someone who knows literally nothing about programming, or engines, or anything? Messing around with that Destinations program on Steam seems fairly easy from their guide (https://developer.valvesoftware.com/wiki/Destinations/Creating_a_Destination#Creating_a_Destination_in_Destination_Workshop_Tools for anyone else curious), but it seems like Unity would be a lot more versatile in what could be done.

Well, I've been using it for years, and the early days were pretty hard work.
However, the process I described above is literally all I had to do for that demo. Import model, rotate and scale to suit. Add in the FPS controller, throw in a few planes and other 3D objects for collisions (and disable their renderers so they don't show) and that's it.
Obviously, all you can do with that is walk around. If you want any kind of interactivity, or additional things for ambience, you're going to have to dig a bit deeper. I'm happy to assist if you've got any questions, though.

Tom Guycot
Oct 15, 2008

Chief of Governors


Pi Mu Rho posted:

Well, I've been using it for years, and the early days were pretty hard work.
However, the process I described above is literally all I had to do for that demo. Import model, rotate and scale to suit. Add in the FPS controller, throw in a few planes and other 3D objects for collisions (and disable their renderers so they don't show) and that's it.
Obviously, all you can do with that is walk around. If you want any kind of interactivity, or additional things for ambience, you're going to have to dig a bit deeper. I'm happy to assist if you've got any questions, though.

Thanks, I appreciate the help. I'm really excited at the idea of this. I had purchased a 360 camera last year so I could capture some things back home for posterity in VR, and while it's still useful for taking 360 pictures at events and such, that would be peanuts compared to being able to create a fully rendered rendition of my parents' cabin, for example. I seriously didn't even consider any of this, thinking it was impossible outside the realm of professional $10k setups and stuff, until you posted about that test of yours.

Pi Mu Rho
Apr 25, 2007

College Slice

Tom Guycot posted:

Thanks, I appreciate the help. I'm really excited at the idea of this. I had purchased a 360 camera last year so I could capture some things back home for posterity in VR, and while it's still useful for taking 360 pictures at events and such, that would be peanuts compared to being able to create a fully rendered rendition of my parents' cabin, for example. I seriously didn't even consider any of this, thinking it was impossible outside the realm of professional $10k setups and stuff, until you posted about that test of yours.

I tried to get into it a while ago, using a Structure sensor on an iPad, but it's pretty low resolution and not very reliable (loses tracking all the time) - photogrammetry gives a lot more flexibility. Obviously, there's a bit of a learning curve, and it helps if you have experience of 3D, but it is pretty accessible and the results are remarkably good. Best of luck with it!

Tom Guycot
Oct 15, 2008

Chief of Governors


Hmm, here's a question, and maybe this is just a result of the poor way I tried to photograph initially, but it just finished aligning the pictures after an hour and a half and says there are only 29 points, and 2 out of 819 photos aligned. I was wondering if you wouldn't mind answering a few questions about your workflow in Photoscan. I was following a guide I found online, but it was focused on just scanning an object, like a bone or something, not a scene. Anyways, what I was following was basically this so far, and maybe you can see where I screwed up (unless it was just the pictures themselves):

- First I went to Workflow to add the folder with the pictures. However, when adding the photos it gives the option of 'create a camera from each file' or 'create multiframe cameras...'. I chose the first option, but the guide I followed didn't mention this part, so I'm not sure if that was right.

- Then it had me align the photos. Here I selected just low accuracy (didn't want to spend hours and hours extra on a test), pair preselection disabled, then left the key point limit at 40,000 and the tie point limit at 100, the defaults.

It churned away at that for an hour and a half, and that's where I am now: it says 29 points, 2/819 photos aligned. After this I was supposed to clean up and optimize the sparse point cloud before making the dense one, then the mesh. Obviously I'm not getting to those steps from where I'm at, but would those have been the proper steps to take, assuming my photos had been taken correctly?

Pi Mu Rho
Apr 25, 2007

College Slice

Tom Guycot posted:

Hmm, here's a question, and maybe this is just a result of the poor way I tried to photograph initially, but it just finished aligning the pictures after an hour and a half and says there are only 29 points, and 2 out of 819 photos aligned. I was wondering if you wouldn't mind answering a few questions about your workflow in Photoscan. I was following a guide I found online, but it was focused on just scanning an object, like a bone or something, not a scene. Anyways, what I was following was basically this so far, and maybe you can see where I screwed up (unless it was just the pictures themselves):

- First I went to Workflow to add the folder with the pictures. However, when adding the photos it gives the option of 'create a camera from each file' or 'create multiframe cameras...'. I chose the first option, but the guide I followed didn't mention this part, so I'm not sure if that was right.

- Then it had me align the photos. Here I selected just low accuracy (didn't want to spend hours and hours extra on a test), pair preselection disabled, then left the key point limit at 40,000 and the tie point limit at 100, the defaults.

It churned away at that for an hour and a half, and that's where I am now: it says 29 points, 2/819 photos aligned. After this I was supposed to clean up and optimize the sparse point cloud before making the dense one, then the mesh. Obviously I'm not getting to those steps from where I'm at, but would those have been the proper steps to take, assuming my photos had been taken correctly?

I'm pretty sure I read the same tutorial once - guy with a dinosaur bone?

When I import the folder of photos, it doesn't give me any options, it just goes ahead and does it. For Align Photos, I go with medium and leave the rest untouched.
Opening up my project, I have 259/259 cameras aligned, so it suggests that it's not liking your photos for some reason. It's a bit of a galling prospect, but I'd suggest going and taking ~50 photos with as much blanket coverage as you can and import those as a new project. Try the same workflow and see if it works any better.

somethingawful bf
Jun 17, 2005

Kazy posted:

Good thing someone here has a Touch and is going to do some testing once they get their USB3 extender in :v:

:woop:

Tom Guycot
Oct 15, 2008

Chief of Governors


Pi Mu Rho posted:

I'm pretty sure I read the same tutorial once - guy with a dinosaur bone?

When I import the folder of photos, it doesn't give me any options, it just goes ahead and does it. For Align Photos, I go with medium and leave the rest untouched.
Opening up my project, I have 259/259 cameras aligned, so it suggests that it's not liking your photos for some reason. It's a bit of a galling prospect, but I'd suggest going and taking ~50 photos with as much blanket coverage as you can and import those as a new project. Try the same workflow and see if it works any better.

Haha, yeah thats the dinosaur one. I figured it was probably my pictures, but I thought I would check if it looked like something else was screwing me up.

Tom Guycot
Oct 15, 2008

Chief of Governors


Holy crap, it worked! I think I screwed up some setting when I tried it the first time, but I tried it again with everything set back to defaults and selected just a section instead of the whole 360 pictures from all 5 points, and it bloody well worked! It had enough detail to even get the elevation rise of the god drat cable from a controller lying across the ground. I'm absolutely blown away by what this software can do and how easy it was in the end. Now I have to try the full scene again, then export it and see if I can get it integrated somewhere where I can view it in VR, but holy crap, just looking at the dense point cloud and 3D model in the software is amazing!

Hmm, the only thing I'm not happy with is the "blobby" nature of every flat plane, it all looks like a gravel road or something. Can this be solved by going with a lower number of faces? I'm excited to find out! Everyone should be photoscanning poo poo, holy crap.

EdEddnEddy
Apr 5, 2012



https://twitter.com/RtoVR/status/743823423437148160

KakerMix
Apr 8, 2004

8.2 M.P.G.
:byetankie:

This is the part where Oculus whips off their hat and stomps on it

Pi Mu Rho
Apr 25, 2007

College Slice

Tom Guycot posted:

Holy crap, it worked! I think I screwed up some setting when I tried it the first time, but I tried it again with everything set back to defaults and selected just a section instead of the whole 360 pictures from all 5 points, and it bloody well worked! It had enough detail to even get the elevation rise of the god drat cable from a controller lying across the ground. I'm absolutely blown away by what this software can do and how easy it was in the end. Now I have to try the full scene again, then export it and see if I can get it integrated somewhere where I can view it in VR, but holy crap, just looking at the dense point cloud and 3D model in the software is amazing!

Hmm, the only thing I'm not happy with is the "blobby" nature of every flat plane, it all looks like a gravel road or something. Can this be solved by going with a lower number of faces? I'm excited to find out! Everyone should be photoscanning poo poo, holy crap.

Nice work!

And yeah, it all gets blobby. You can try decimating the mesh to get fewer triangles and flatter(ish) surfaces, but that's the point where I usually export it to 3D Studio and start editing the mesh manually.

EdEddnEddy
Apr 5, 2012



NM, reading the full page helps... It's been a long week.

Also, this process uses just a normal camera for capture, right? Would it have any benefit from using a 3D camera for actual depth sensing or anything? (Could an old EVO 3D be of any use outside of the occasional nifty 3D picture? lol)

Pi Mu Rho
Apr 25, 2007

College Slice

EdEddnEddy posted:

NM, reading the full page helps... It's been a long week.

Also, this process uses just a normal camera for capture, right? Would it have any benefit from using a 3D camera for actual depth sensing or anything? (Could an old EVO 3D be of any use outside of the occasional nifty 3D picture? lol)

You can already set it up to use two cameras to take stereoscopic image pairs for each position, so I suspect it would be similar. There's some benefit to it, but I wouldn't be able to say how much until I try it out.
Honestly, you can use it with just about anything. There's a scene on Valve's Destinations of popcorn on a table, at super large scale, that was done using Photoscan and an iPhone. As long as your photos are all from the same (or identical) sources, it can work with them.
Obviously, decent lenses make a big difference, depending on what you're shooting.

Tom Guycot
Oct 15, 2008

Chief of Governors


Pi Mu Rho posted:

Nice work!

And yeah, it all gets blobby. You can try decimating the mesh to get fewer triangles and flatter(ish) surfaces, but that's the point where I usually export it to 3D Studio and start editing the mesh manually.

3D Studio, as in 3D Studio Max? I see you can get that for free with an education license. I've never worked with a 3D modeling program before; how simple is it to clean up one of these scans?

EdEddnEddy posted:

NM, reading the full page helps... It's been a long week.

Also, this process uses just a normal camera for capture, right? Would it have any benefit from using a 3D camera for actual depth sensing or anything? (Could an old EVO 3D be of any use outside of the occasional nifty 3D picture? lol)

I don't know enough about 3D cameras to know how that would work, but I guess it would depend on how high quality the individual pictures it takes are. If the pictures from each lens are of sufficient quality, I would think it would be quite useful. I used a 16-megapixel APS-C camera with a 35mm lens, ISO all the way down, f/9, and a longer shutter to compensate for the light levels. The quality I got, at least in this test, with a textured mesh looks absolutely fantastic, though I won't have a chance to even try the following steps of importing it into something that can view it in VR until tomorrow. Some of the guides online I was reading about it seemed to say you should have at least an 8MP camera, but even a decent modern phone should work.

edit: Oh, does the EVO 3D save the pictures just as twin JPGs, or some weird proprietary 3D format of some kind? That could potentially cause a problem. Honestly, I say just give it a go: try the Photoscan program, do something small, take a few pictures, and run it through. It would take you a few minutes instead of guessing about it. I think you'll be amazed.

Tom Guycot fucked around with this message at 18:54 on Jun 17, 2016

EdEddnEddy
Apr 5, 2012



I may have to tinker with it, but due to my current life time constraints already I don't get near enough time to just enjoy VR let alone start trying to create stuff.

I really need to find a job that is more open to working at home, or at least one working with all this tech, so I can explore and work toward the future of VR, rather than currently just securing internal IT assets while having little access to my own stuff/hardware, or any VR hardware for that matter, for extended periods of time. Ugh

HTC Get back to me for that drat Management Trainee Position. :argh:

Pi Mu Rho
Apr 25, 2007

College Slice

Tom Guycot posted:

3D Studio, as in 3ds Max? I see you can get that for free with an education license. I've never worked with a 3D modeling program before; how simple is it to clean up one of these scans?


Yep, that 3D studio. You can apparently clean up the mesh and reimport it into photoscan and reproject the textures, but I haven't got that to work yet.
What I do in 3D Studio is largely selecting all the polygons or vertices in a plane (like a wall) and then making them planar so they're all level. Then move/rotate it as necessary so it fits the original geometry. After that, I push/pull/extrude parts like windows to give a bit more depth. It's a long and slow process, though, and especially hard with very high-resolution meshes.
I was experimenting with render to texture, where you take part of your mesh and essentially project it onto a lower poly surface, but it seems to degrade the texture quality significantly.
I'll get the hang of it eventually, it's just a fair bit of trial and error at the moment.
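For anyone curious what the "make planar" step actually does to the mesh, here's a minimal sketch in plain Python (not anything out of 3ds Max; the wall vertices and the +X normal are made up for illustration). It projects a set of roughly coplanar vertices onto one flat plane through their centroid, which is essentially what flattening a selected wall does:

```python
def make_planar(verts, normal):
    """Project each (x, y, z) vertex onto the plane through the
    centroid of verts with the given unit normal."""
    n = len(verts)
    cx = sum(v[0] for v in verts) / n
    cy = sum(v[1] for v in verts) / n
    cz = sum(v[2] for v in verts) / n
    out = []
    for x, y, z in verts:
        # signed distance from the plane, then pull the vertex back by it
        d = (x - cx) * normal[0] + (y - cy) * normal[1] + (z - cz) * normal[2]
        out.append((x - d * normal[0], y - d * normal[1], z - d * normal[2]))
    return out

# A scanned wall facing +X: x should be constant but the scan is noisy.
wall = [(0.02, 0.0, 0.0), (-0.01, 1.0, 0.0), (0.05, 1.0, 2.0), (-0.02, 0.0, 2.0)]
flat = make_planar(wall, (1.0, 0.0, 0.0))
# Every x coordinate now sits at the mean x of the selection (0.01);
# y and z are untouched because the normal points straight along x.
```

A real scan's wall won't be axis-aligned, so in practice you'd fit the normal from the selection (or just eyeball it in the viewport like Pi Mu Rho describes) before flattening.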

Tom Guycot
Oct 15, 2008

Chief of Governors


Pi Mu Rho posted:

Yep, that 3D studio. You can apparently clean up the mesh and reimport it into photoscan and reproject the textures, but I haven't got that to work yet.
What I do in 3D Studio is largely selecting all the polygons or vertices in a plane (like a wall) and then making them planar so they're all level. Then move/rotate it as necessary so it fits the original geometry. After that, I push/pull/extrude parts like windows to give a bit more depth. It's a long and slow process, though, and especially hard with very high-resolution meshes.
I was experimenting with render to texture, where you take part of your mesh and essentially project it onto a lower poly surface, but it seems to degrade the texture quality significantly.
I'll get the hang of it eventually, it's just a fair bit of trial and error at the moment.

Well, I can't wait to see how you get on with that. I'm thoroughly pumped about this photogrammetry stuff now.

ChickenArise
May 12, 2010

POWER
= MEAT +
OPPORTUNITY
= BATTLEWORMS
I was lucky enough to spend a year at a school that had 3DSMax for student use (2001ish) and I feel like that has served me well now that I've dipped my toe into VR tinkering.

Hierophant
Oct 21, 2007
For photogrammetry there's also Autodesk's 123D Catch and ReMake, although I've found that both are tuned towards capturing a central object that the camera orbits around, rather than an entire scene that surrounds the camera. Having a stereo pair doesn't help much, because most of these software packages will just treat it as twice as many photos.

somethingawful bf
Jun 17, 2005
Since Touch is working with SteamVR now, there are a lot of dev posts on /r/oculus talking about/showing its capability. As many of us assumed, Touch works fine in 360 roomscale when you set up the cameras like you would Lighthouses; there are no weird occlusion issues, and the range is fine as well. Get hype :)

KakerMix
Apr 8, 2004

8.2 M.P.G.
:byetankie:

Poetic Justice posted:

Since Touch is working with SteamVR now, there are a lot of dev posts on /r/oculus talking about/showing its capability. As many of us assumed, Touch works fine in 360 roomscale when you set up the cameras like you would Lighthouses; there are no weird occlusion issues, and the range is fine as well. Get hype :)

Sure would be a thing if Oculus room scale games only come out on Steam because it isn't what Oculus is "targeting"


Bremen
Jul 20, 2006

Our God..... is an awesome God

Poetic Justice posted:

Since Touch is working with SteamVR now, there are a lot of dev posts on /r/oculus talking about/showing its capability. As many of us assumed, Touch works fine in 360 roomscale when you set up the cameras like you would Lighthouses; there are no weird occlusion issues, and the range is fine as well. Get hype :)

Well, some devs have said it has trouble, to be fair. But it at least appears to mostly work.
