Friday, March 22, 2019

The Cemetery Hill - Final write up



Above: a screencap of the cemetery hill environment in Unity.

  • The fence is a geo asset reconstructed from scanned object data
  • The ground is a tileable material asset reconstructed from scanned ground data
  • Audio is white noise recorded on location the same day
  • A particle emitter mimics seedlings being blown off a tree (documented that day), included to demonstrate dynamic shadows
  • Statue is a previously scanned asset (reconstructed from scanned object data) which is NOT true to the location, included to demonstrate combining scanned assets
  • A default cube, for lighting comparison
  • Lighting setup is a simple directional light and gradient skybox. 

Critical analysis:


Bad input = bad output
  • There are obvious issues in model and texture quality, which can only be solved with better data capture. Admittedly, I began shooting this location in the late afternoon, which made me feel rushed. The benefit was that it was cloudier (therefore, more diffuse lighting), but I relied too much on burst shutter mode and was sloppy with my photography.
  • This brings up an important point: I wish I had spent more time in the data capture phase of my research this term. I don't feel closer to perfecting the (already documented) methods; instead I felt rushed to output assets and get back into the software side. In the end I didn't feel ready to talk about rendering because I was sloppy up front and didn't test all the tools available to me. Bad input = bad output, and there are no good renders without good data.
The tileable material doesn't look like ground
  • Yeah, see above. Need a tripod, manual shutter, and more time.
  • There was also a lot of wind that day, which is not an ideal condition.
The statue looks out of place
  • This is another case of bad input = bad output, but I can get a little more specific here. First, this was shot on my phone instead of a proper DSLR. Second, the albedo was not completely clean or color matched to the scene; however, with more time both of these could be finessed to perfection. In my last post I demonstrated how I can use Substance Designer to achieve a clean albedo, then reuse the template on different assets.
The seedlings are just planes
  • I know, it's placeholder. This kind of asset is an example of something that would be better box modeled, and for this demo that's a skill I didn't think was necessary to demonstrate. Plus, the real reason this was included was to show something animated with dynamic shadows.

Successes:

I'm happy with the way the space is taking shape
  • The organic feeling (that usually takes SO long to replicate) is almost instant in scanning. The space has a distinct 'character' to it that feels natural. Popping the white noise track in made the whole scene come alive. Plus, the process on this project has been FAST; this is the product of less than a week of total work hours.
I'm happy with the direction of my research
  • I am starting to see the path from where I currently am to here: https://www.youtube.com/watch?v=hCeP_XUIB5U. I think it's important to emphasize that I am still replicating these scanning methods (which Unity has thoroughly documented and published). It turns out there are already some pretty big shoulders to stand on.
  • I've greatly enjoyed working closer with Unity and realtime art-- something I was never formally trained in and want to continue learning. 
  • Posting about my work has helped me get involved in other scanning projects around the school, and there does not seem to be any shortage of them. 
--

Looking ahead to next term, I hope to be able to increase the quality of scans and get results that are closer to the Unity Fontainebleau demo.
I would also like to define a more specific and meaningful production that would fit into the thesis model.
I am not so worried about producing an effective tech demo as I am about making interesting interactive art. One fear I have is that I am entering my third term without having had any discussions about art making and art direction. I hope this comes before it's time to propose our thesis.
For that reason, next term I am hoping to continue researching best practices in scanning tech, but also look into areas such as installation art, interactive art, and VR.

Monday, March 18, 2019

The Cemetery Hill - proof of tech

One thing that I like about working with scanning is that it is easy to work iteratively and BIG to SMALL. It's easy to quickly generate a sparse pointcloud of the entire space (especially with 360 cameras), then refine per object into smaller, denser pointclouds. Lastly, each asset can be optimized and made beautiful for a realtime environment.
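The big-to-small idea can be illustrated with a small sketch. This is a hedged example in plain numpy (the photogrammetry software does the real reconstruction work; the voxel-grid downsample below is just my own toy stand-in for collapsing a dense cloud into a sparse working version):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Collapse a point cloud to one averaged point per occupied voxel.

    points: (N, 3) float array; voxel_size: edge length of each voxel cube.
    """
    # Assign every point to the integer voxel cell it falls in.
    cells = np.floor(points / voxel_size).astype(np.int64)
    # Group points by cell and average each group.
    _, inverse, counts = np.unique(
        cells, axis=0, return_inverse=True, return_counts=True
    )
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# A dense cloud of 10,000 random points in a 1m cube collapses to
# at most 8 representative points on a 0.5m voxel grid.
dense = np.random.rand(10_000, 3)
sparse = voxel_downsample(dense, 0.5)
```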

I feel good about ending out the term with a workflow that makes sense and can show results in just a couple of days. The Cemetery Hill project is just at the beginning, but the proof of tech can already be presented.

Putting it all together


On my fourth day working on the new dataset from the graveyard-- this process is so fast. 

In my last post I experimented with using control points to merge multiple chunks of data. There were two methods (same steps, different order of operations), and I predicted that recalculating the entire workspace with all cameras included would produce better results. I did that, turned the settings up to high detail, and let it process overnight.
As you can see, the results are gorgeous 😍

To compare, here is the dense pointcloud processed with default settings and without recalculating the entire workspace. 

And from another angle







Sunday, March 17, 2019

Even more on de-lighting

I wanted to build on my notes from before, to include what I've worked with over the term.

Capture Best Practices

Good input = good output
It helps to shoot subjects on cloudy, overcast days, or otherwise in diffuse lighting.
There is a tradeoff, however: sometimes harsh lighting helps generate a better mesh or better surface details.

The stitching software does some amount of color correction. This is all under the hood.

Geometry Dependent Methods

The 'traditional' and more accurate way to de-light a base color map would be to totally recreate the lighting conditions, matching all the real-world shadow data, then cancel them out. However, researching other methods has led me to believe that not only is this tedious, it is no longer industry standard.
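The "cancel them out" step can be sketched numerically. This is a hedged illustration in plain numpy, assuming a lighting/irradiance map has already been baked from the recreated conditions (the function name and toy data are mine, not from any tool):

```python
import numpy as np

def cancel_lighting(captured, lighting, eps=1e-4):
    """Divide a recreated lighting map out of the captured base color
    to approximate the underlying albedo."""
    # Clamp the divisor so fully shadowed pixels don't blow up.
    albedo = captured / np.maximum(lighting, eps)
    return np.clip(albedo, 0.0, 1.0)

# Toy example: a flat 50%-grey surface photographed under a lighting
# gradient. Dividing the gradient back out recovers the flat albedo.
lighting = np.linspace(0.2, 1.0, 8).reshape(1, 8, 1).repeat(3, axis=2)
captured = 0.5 * lighting
albedo = cancel_lighting(captured, lighting)  # ~0.5 across the whole map
```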

Image only Methods - stripping light/shadow from base color map

Saturday, March 16, 2019

Material Scanning

Notes from these tutorials:
https://www.youtube.com/watch?v=kDMz6djZCkc
https://www.youtube.com/watch?v=dcswz_vYjiQ
https://www.youtube.com/watch?v=1W2VWCehgGg


The end results are technically correct-- tileable PBR maps all exported. They're even de-lit and color corrected.
However, the input data was sloppy. Would like to shoot again with a tripod.
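As an aside, the classic "offset" check for tileability (shifting the texture by half its size so the tiling seams land in the middle, where they can be inspected or painted out) is easy to sketch. A hedged numpy illustration, not any tool's actual implementation:

```python
import numpy as np

def offset_half(tex):
    """Shift a texture by half its width and height, wrapping around,
    so any tiling seams move to the center of the image."""
    h, w = tex.shape[:2]
    return np.roll(tex, (h // 2, w // 2), axis=(0, 1))

# The shift wraps, so offsetting twice restores the original exactly;
# a texture tiles seamlessly iff the offset version shows no seam.
tex = np.random.rand(64, 64, 3)
restored = offset_half(offset_half(tex))
```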


Merging Multiple Scans with Control Points

I have begun again with a new location. This time, atop a hill in a cemetery.


Tuesday, March 12, 2019

After Class, wk ??

"executive summary" of term
downloadable video (screengrabs fine)
write-up
something that makes sense to the public
email by next friday

not here to demonstrate polish. demonstrate the full pipeline + tools
maybe write up delighting processes?
doesn't matter if using new data set or not

for outline
the headings/subheadings should tell a story
big to small. big ideas first

Saturday, March 9, 2019

Prototype: The Smoker Deck

Download here for Android + Google Cardboard

Separated Mesh
https://drive.google.com/open?id=1hNrkL4TSNNOvP-teHeqQT2yeuT_mvdsO

Single Mesh
https://drive.google.com/open?id=17ehS4F1MONKlktgRxc04yzF4QHO7qXXQ


A gallery of failures! and a write-up on this term

I am essentially confirming that I have reached a dead end with my separated-mesh idea! So I used the auto mesh decimation tools and made a single-mesh version as well.

Both reconstructions of the data are, well, bad.

Here is the separated mesh in Unity


And here is the single decimated mesh in Unity




Friday, March 8, 2019

Following the Unity Documentation

My goal is to bring everything into Unity and experiment with the data in a game rendering environment. According to my notes these are my next steps:

  • Export high res mesh, lo res unwrapped mesh, and re-projected color map out of Zephyr
  • Use something to bake mesh maps (Substance, xNormal) 
  • De-light the color map (Unity plugin, Photoshop, both)
  • If applicable, develop tileable maps for custom material
  • Bring all into game engine

I keep getting stuck on little details within the process. Export settings, incompatible file formats, etc. But I have a bit of a mental block to get over as well.

Unity Photogrammetry Guide

I have mentioned this talk before: https://www.youtube.com/watch?v=Ny9ZXt_2v2Y
and I have even mentioned the delighting tool that this team published.

But I hadn't put two and two together. Turns out there is an AMAZING photogrammetry guide that was published under this Unity research team. It is over 40 pages of best practices, equipment, and general documentation. https://unity.com/solutions/photogrammetry

And just look at their tech demo 😍 https://www.youtube.com/watch?v=q2Z1oiFDKI0


Monday, March 4, 2019

After Class wk09

Notes from after class.

Pointcloud Meshing with Separations, pt3

Evaluating my progress on the Smoker Deck environment...

Digital Artist Residencies

I didn't learn what an artist residency was until I started working for an art college. A residency is essentially an opportunity granted to artists to create work. It may require a fee or it may provide funding to the artist. It may be on location (many involve staying in shared communal housing) or remote. It may be based around a specific discipline or prompt. It may also expect a contribution from the artist, such as a lecture or work of art.

These programs tend to be geared toward professionals with a working portfolio, and may expect a BFA/MFA. They are different from an internship program in that they are not necessarily training candidates for a job; instead they tend to encourage self-directed work or may be community focused.

Quain Courtyard pt 4, mild success