Friday, March 8, 2019

Following the Unity Documentation

My goal is to bring everything into Unity and experiment with the data in a game rendering environment. According to my notes, these are my next steps:

  • Export the high-res mesh, low-res unwrapped mesh, and re-projected color map out of Zephyr
  • Use something to bake mesh maps (Substance, xNormal)
  • De-light the color map (Unity plugin, Photoshop, or both; see the sketch after this list)
  • If applicable, develop tileable maps for a custom material**
  • Bring everything into the game engine
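For the de-lighting step, here's the basic idea as I understand it, written as a minimal Python sketch (this is NOT the actual Unity De-Lighting tool, just the low-frequency-division concept behind it; the filename, blur sigma, and clamp value are my own placeholders):

```python
# Minimal de-lighting sketch: approximate the baked-in lighting as a very
# blurry luminance image, then divide it back out of the color map.
# A stand-in for the real Unity De-Lighting tool / Photoshop workflow;
# "colormap.png" and sigma=64 are placeholders, not recommendations.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

img = np.asarray(Image.open("colormap.png").convert("RGB"), dtype=np.float32) / 255.0

lum = img.mean(axis=2)                      # rough luminance
lighting = gaussian_filter(lum, sigma=64)   # keep only low-frequency shading
lighting = np.clip(lighting, 0.05, None)    # don't divide by near-zero shadows

# Divide the shading out per channel, then renormalize overall brightness.
albedo = np.clip(img / lighting[..., None] * lighting.mean(), 0.0, 1.0)
Image.fromarray((albedo * 255).astype(np.uint8)).save("colormap_delit.png")
```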

I keep getting stuck on little details within the process: export settings, incompatible file formats, and so on. But I have a bit of a mental block to get over as well.

**Here's the part that gets me. I messed up by thinking "accuracy over taste," and have been trying to recreate an environment based directly on the point cloud data. What I should be doing is prioritizing taste over accuracy. The Unity team emphasized NOT trying to collect large amounts of spatial data (lol) and instead breaking an environment up into collectable parts. I understood this to some extent when I started, but my sense of scale was off. I had hoped to capture a completely blank slate of an environment and fill it in later.

If you're wondering where I get the phrase "taste over accuracy" from, it's from the Spider-Verse team:
https://www.youtube.com/watch?v=vDjvhwgbsP8


If I had more time, I would collect my data differently, prioritizing tileable data and studying how to create tileable and maybe procedural materials. For example, in their tech demo, the majority of the ground is a 'ground elements' material captured from a 2m x 2m patch of grass. A toy version of the border cross-fade trick for tiling is sketched below.
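Out of curiosity, here is the simplest programmatic version of making a capture tile: cross-fade each border with a half-offset copy of the image. A sketch only, assuming numpy/Pillow and a hypothetical "ground_2m.png"; real tileable materials would come out of Substance or careful clone-stamping:

```python
# Toy tileability fix: along each axis, cross-fade the border band with a
# half-offset copy of the image (np.roll), so opposite edges match when
# the texture repeats. Expect some ghosting in the blend band.
import numpy as np
from PIL import Image

def edge_ramp(n, margin):
    """Weight 0 at both edges, ramping to 1 over `margin` pixels (margin*2 < n)."""
    w = np.ones(n, dtype=np.float32)
    ramp = np.linspace(0.0, 1.0, margin, dtype=np.float32)
    w[:margin] = ramp
    w[-margin:] = ramp[::-1]
    return w

def make_tileable(img, margin=64):
    """Cross-fade each axis with a half-offset copy so the borders wrap."""
    out = img.astype(np.float32)
    for axis in (0, 1):
        n = out.shape[axis]
        shape = [1, 1, 1]
        shape[axis] = n
        w = edge_ramp(n, margin).reshape(shape)
        rolled = np.roll(out, n // 2, axis=axis)
        out = out * w + rolled * (1.0 - w)
    return out

img = np.asarray(Image.open("ground_2m.png").convert("RGB"))
Image.fromarray(make_tileable(img).astype(np.uint8)).save("ground_tileable.png")
```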
I think the point cloud I have now is perhaps best for rough alignment.



I've compiled a list of different categories of objects and their best capture methods (to my knowledge). Before planning a shoot, it is important to itemize the assets and types of data based on these categories (which I did not do).

  • Standard prop
    • Medium to large objects
    • Walkaround three-ring capture (rough shot-count math after this list)
    • Output geo and maps
  • Small prop
    • Small handheld objects
    • Controlled three-ring capture with a light tent and tripod
  • Tileable textures
    • Walls, ground, high detail
    • Zig-zag pattern
    • Output reusable material and tileable maps
    • May combine with prop geo or a simple structural environment
  • Foliage
    • Plants, leaves, feathers
    • Flat scan
    • Output simple geo and maps
  • Huge assets
    • Building exteriors
    • Drone video, variable pattern
    • Output geo and maps
  • 'Rough alignment'
    • Could use a 360 camera / video to generate a less dense point cloud for reference/measurement
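To make the shoot planning concrete, here's a back-of-envelope shot counter. The 15-degree angular step and 0.5 m zig-zag spacing are my own guesses, not anything from the Unity docs:

```python
# Rough shot-count math for itemizing a shoot before going out.
# The degree step and grid spacing are assumptions, not official guidance.
import math

def three_ring_shots(rings=3, degrees_per_shot=15):
    """Walkaround capture: one full orbit per ring."""
    return rings * (360 // degrees_per_shot)

def zigzag_shots(width_m, height_m, spacing_m=0.5):
    """Tileable-surface capture: grid of overlapping shots over a patch."""
    cols = math.ceil(width_m / spacing_m) + 1
    rows = math.ceil(height_m / spacing_m) + 1
    return cols * rows

print(three_ring_shots())      # 72 shots for a standard prop
print(zigzag_shots(2.0, 2.0))  # 25 shots for a 2m x 2m ground patch
```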



I am wondering if I should close the smoker deck chapter here and begin again with a better evaluation of my next shoot.

Although, since I already have the manual retopo mesh, maybe I can combine it with some of these ideas...

This also forces me to question whether my interest in reality capture is weighted toward the acquisition side. The idea of optimizing data capture (rather than post-processing) is more appealing to me.

It's week 9 and my mind is all over the place. One thing I want to keep in mind is that the world of reality capture is vast, and there seem to be ever-multiplying directions of research and application.





