Wednesday, January 16, 2019

After class, week 2

Yesterday I received a lot of feedback to chew on. Considering we're entering week two, it's a good time to re-orient and really evaluate whether I'm working in the right direction.

There are essentially two sides to 3D scanning: data acquisition, and processing (plus post-processing).

On the data acquisition side, I can see where my tests are going. Continuing to become familiar with the equipment is important, and I still would like to get a grasp on which tools are appropriate when. I am still asking the question from my previous post: what are the advantages of laser scanners over photogrammetry, if any at all? I have not successfully captured a prop object yet.

That being said, perfecting the scanning process IS something I can continue to work on in parallel with other research. The processing side holds many more unknowns for me.

This might be a good time to post some of my references. 
  • First, virtualvizcaya.org has a great article, "What is 3D Documentation", which emphasizes the need for 3D documentation of cultural spaces and explains a range of 3D scanning/capture techniques and the scenarios each suits. Not to mention it features a massive point cloud of the whole estate rendered right in your browser (note: WebGL with Potree).
  • Second, The Vanishing of Ethan Carter production blog boasted, in 2014, that they were starting a "visual revolution" for games: a small indie team used photogrammetry to capture real-world assets and locations for their spooky walking simulator. (The game itself was rather disappointing! I have a lot of opinions about that which need their own post.)
  • Lastly, Substance Painter has been publishing a lot of great tutorials on scanning processes as well, such as this one on recreating ivy leaves. In particular, I am excited to use their de-lighting tool to generate a true diffuse map out of photo data.
Now for the "other" research... rendering. It is one thing to optimize scan data into 3D polygonal models for use in realtime game engines or prerendered footage, but in class Paul brought up a concept completely new to me: rendering points. Some engines render points directly instead of polygons; the Vizcaya site above is one example. It uses Potree, an open-source WebGL point cloud renderer. While the Potree output is noticeably low resolution, it IS impressive considering it runs on web/mobile. The Vizcaya dataset is public domain as well, so one of my next steps might be downloading, installing, and poking around in this existing project.
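To make "poking around" more concrete: based on Potree's published examples, embedding a point cloud boils down to creating a viewer and loading a converted dataset. This is only a sketch with placeholder paths (the `pointclouds/estate/cloud.js` file and the page scaffolding are hypothetical, not the actual Vizcaya data), assuming the Potree build is already included on the page:

```
<!-- Hypothetical minimal Potree page; paths are placeholders. -->
<div id="potree_render_area"></div>
<script>
  // Create the viewer; keep the point budget modest for web/mobile.
  const viewer = new Potree.Viewer(document.getElementById("potree_render_area"));
  viewer.setPointBudget(1_000_000);
  viewer.setEDLEnabled(true); // Eye-Dome Lighting helps unlit points read as surfaces.

  // Load a cloud previously converted with PotreeConverter.
  Potree.loadPointCloud("pointclouds/estate/cloud.js", "estate", (e) => {
    viewer.scene.addPointCloud(e.pointcloud);
    viewer.fitToScreen();
  });
</script>
```

The interesting part is what the converter does behind the scenes: it builds an octree of the raw points so the viewer can stream only the nodes visible at the current zoom level, which is how a massive estate-sized cloud stays interactive in a browser.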

Rendering points is still so unfamiliar to me that I have no, ahem, point of reference to begin with. I need to start from scratch. Paul also mentioned the Euclideon renderer, which has an impressively gossipy paper trail that can be found lurking around the web.

 

