
Hello Siggraph! Hello Anaheim! Hello palm trees~

Day one begins with a so-so panel of Siggraph intros aimed at maximizing the Siggraph experience. It’s only a fair talk because the speakers either spoke too quietly or too loudly, and the slides didn’t always match what they were presenting. I don’t think they took enough time to prepare. But the panel does give some useful information, like the must-see events that only happen once during the conference (e.g. technical papers fast forward, real-time live), networking events, etc. So it’s still worth the time for first-time attendees to get a general sense. Otherwise, it’s easy to plan things out based on previous Siggraph experience, since the structure of Siggraph rarely changes. I’m looking for interesting topics with technical details, so I just looked at the courses and panels at the highest registration level and picked the ones I’m interested in.

The most exciting event in the evening is the Technical Papers Fast Forward!

This two-hour event is a great, fun show presenting all of this year’s Siggraph papers, where each paper gets only 30 seconds~ My favorite one extracts the style of any painting and applies it to a regular photo: Painting Style Transfer for Head Portraits Using Convolutional Neural Networks by Ahmed Selim, et al. That’s a typical example of combining art & science~ Say you want to paint your face in the style of Van Gogh’s The Starry Night; this is one option. Another art & science paper is about modeling 3D characters from 2D drawings: Modeling Character Canvases from Cartoon Drawings, by Mikhail Bessmeltsev, et al. A similar one builds urban models from sketches: Interactive Sketching of Urban Procedural Models, by Gen Nishida, et al.

There are some game-changing techniques this year. The most impressive one is watching a 3D film in the cinema without wearing glasses: Large Scale Automultiscopic Display by Netalee Efrat, et al. This paper redesigns the display pipeline of 3D cinema so that you don’t need 3D glasses anymore. Another one is sensing gestures using radar: Soli: Ubiquitous Gesture Sensing With Millimeter Wave Radar, by Mustafa Karagozler, et al. This is BIG! With gesture sensing, technically any touch-based user interaction can be replaced! Imagine your fingers are wet and you cannot touch the screen; you could still interact using gestures!

One useful technique replaces the sky in a photograph and adjusts the rest of the scene accordingly: Semantic-Aware Sky Replacement, by Yi-Hsuan Tsai, et al. I can imagine tons of people using it if they didn’t get satisfying photos on vacation. Though I’ll probably not use it myself, because I’m serious about photography and have a strong bias against “faking” photos with these techniques…

One last thing to end the technical paper review: I’m glad to hear about some papers doing modeling from natural language; that’s directly related to my side project~~ Would love to see them in the following days.

Frostbite rocks!

I also went to the Physically-Based Shading course. It’s 3.15 hours long, and it was super packed.. The first instructor, Naty Hoffman from Lucasfilm, opened the talk with massive math equations. Though I could roughly get the idea of the tricks and hacks in the BRDF models, the talk would have been more amazing if there were more pictures to go with the math equations and if Naty talked slower.. There are A LOT of attendees from abroad!! The slower, the clearer!
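(For anyone curious what the “BRDF math” looks like, here is a minimal sketch of a standard microfacet specular BRDF, with a GGX distribution, Smith masking, and Schlick Fresnel, in Python. This is just an illustrative example of the family of models such a course covers, not a transcription of Naty’s slides, and the function names are my own.)

```python
import numpy as np

def ggx_ndf(n_dot_h, alpha):
    """GGX / Trowbridge-Reitz normal distribution function."""
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom * denom)

def smith_g1(n_dot_x, alpha):
    """Smith masking term for a single direction (GGX form)."""
    a2 = alpha * alpha
    return 2.0 * n_dot_x / (n_dot_x + np.sqrt(a2 + (1.0 - a2) * n_dot_x * n_dot_x))

def fresnel_schlick(v_dot_h, f0):
    """Schlick's approximation to the Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def cook_torrance_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0):
    """Microfacet specular BRDF: D * G * F / (4 (n.l)(n.v))."""
    alpha = roughness * roughness  # the common "squared roughness" remap
    D = ggx_ndf(n_dot_h, alpha)
    G = smith_g1(n_dot_l, alpha) * smith_g1(n_dot_v, alpha)
    F = fresnel_schlick(v_dot_h, f0)
    return D * G * F / (4.0 * n_dot_l * n_dot_v + 1e-7)
```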

I’ll jump directly to the rocking Frostbite talk because I still remember their content. Maybe because I’m interested in their topics, maybe because I still had energy during their talk, or maybe because they put more images on the slides, or a combination of the above.. So, Frostbite renders the sky and atmosphere, the sun, and volumetric clouds in real-time. To me, this is insane, because most traditional rendering happens indoors, where it’s already hard to achieve real-time performance with physically-based shading, let alone physically-based outdoor rendering. But they achieved real-time for outdoor scenes! The general idea of sky and atmosphere rendering is to precompute look-up tables (LUTs) based on the weather of a scene, then apply the LUTs when rendering the scene, along with the eye and light directions and some other arguments that I don’t remember:p Perlin-Worley noise and a two-lobe HG (Henyey-Greenstein) phase function are used for volumetric cloud rendering. I didn’t get the idea of sun rendering, though.
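To make that concrete, here is a toy sketch of the two pieces I did catch: a two-lobe Henyey-Greenstein phase function (a blend of a forward-scattering lobe and a backward-scattering lobe) and the general shape of a precompute-then-lookup LUT. This is my own reconstruction of the idea, not Frostbite’s code; all of the parameter values and the LUT parameterization below are assumptions.

```python
import numpy as np

def hg_phase(cos_theta, g):
    """Single Henyey-Greenstein lobe; g in (-1, 1) controls anisotropy."""
    g2 = g * g
    return (1.0 - g2) / (4.0 * np.pi * (1.0 + g2 - 2.0 * g * cos_theta) ** 1.5)

def two_lobe_hg(cos_theta, g_forward=0.8, g_back=-0.3, blend=0.7):
    """Blend a forward and a backward lobe for cloud scattering.
    The specific g values and blend weight are made up for illustration."""
    return blend * hg_phase(cos_theta, g_forward) + (1.0 - blend) * hg_phase(cos_theta, g_back)

def precompute_transmittance_lut(num_altitudes=64, num_angles=256):
    """Hypothetical 2D transmittance LUT indexed by (altitude, sun zenith angle).
    The toy integrand is made up; a real precomputation integrates atmospheric
    extinction along the ray for the chosen weather/atmosphere settings."""
    altitudes = np.linspace(0.0, 1.0, num_altitudes)      # normalized altitude
    cos_zeniths = np.linspace(-1.0, 1.0, num_angles)      # cosine of sun zenith angle
    alt, mu = np.meshgrid(altitudes, cos_zeniths, indexing="ij")
    optical_depth = (1.0 - alt) / np.maximum(mu, 0.01)    # toy path length through the air
    return np.exp(-optical_depth)                         # Beer-Lambert falloff

def sample_transmittance(lut, altitude01, cos_zenith):
    """Nearest-neighbour lookup (a real renderer would interpolate)."""
    i = int(np.clip(altitude01, 0.0, 1.0) * (lut.shape[0] - 1))
    j = int((np.clip(cos_zenith, -1.0, 1.0) * 0.5 + 0.5) * (lut.shape[1] - 1))
    return lut[i, j]
```

The point of the LUT is that the expensive atmosphere integration happens once up front (per weather setup), so the per-pixel cost at render time is basically a texture fetch, which is what makes real-time outdoor rendering plausible.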

It’s 12:15am Pacific time, i.e. 3:15am Eastern time. I’m so sleepy…
