Eric Enderton



DigiPro 2014 CFP: Submit your new techniques for computer graphics film production to DigiPro 2014, the Digital Production Symposium. Again this year, DigiPro will be the Saturday before SIGGRAPH, August 9, 2014, in Vancouver. Submission deadline is May 16, 2014.

The Pixar keynote at GTC will show Pixar's interactive lighting preview tool, built on the NVIDIA OptiX engine for GPU ray tracing, as well as real-time feedback for character animation. By Dirk Van Gelder and Danny Nahmias. Weds., March 26, 2014.

Legion shows you how to do physically based rendering on the GPU. It's an open-source Monte Carlo ray tracing renderer, using the NVIDIA OptiX engine for GPU ray tracing. By Keith Morley.



DigiPro Advisory Board member. Digital Production Symposium. August 9, 2014, Vancouver, Canada.
FMX Program Board member. Conference on Animation, Effects, Games and Transmedia. April 22-25, 2014, Stuttgart, Germany.
JCGT Editorial Board member. Journal of Computer Graphics Techniques.



DigiPro 2013 logo

  Proceedings of DigiPro 2013, the Digital Production Symposium
July 20, 2013
Program Chairs: Eric Enderton, Nafees Bin Zafar. Conference Chairs: Larry Cutler, Doug Epps.
[Web site] [ACM DL]
Conference: Papers and talks on character animation, large fluid simulations, set capture, stereo, and more. Keynote by Pixar's Tony DeRose. Click on the logo to see the program.

DigiPro 2012 logo

  Proceedings of DigiPro 2012, the Digital Production Symposium
August 4, 2012
Program Chairs: Eric Enderton, Larry Cutler. Conference Chairs: Armin Bruderlin, Ken Anjyo.
[Web site] [ACM DL]

Conference: Ten original papers on computer graphics production, kicked off by a keynote by Joe Letteri of Weta Digital. Papers on fire, smoke, crowds, faces, stereo and more. The 2012 Digital Production Symposium was a rousing success, with film industry developers, artists, researchers, and students -- 128 attendees in all -- coming together at the DreamWorks Animation studio, as an official SIGGRAPH 2012 co-located event.

The Digital Production Symposium encourages the sharing of algorithms, procedures and insights for the production of top quality visual effects and computer animation. The goals are to bring together scientists, engineers, artists and producers, and to close the gap between research results and industry needs.


  The Workflow Scale: Why 5x Faster Might Not Be Enough
Eric Enderton, Daniel Wexler
Computer Graphics International Workshop on VFX, Computer Animation, and Stereo Movies, 2011
[PDF] [Discussion] [Slides]

Abstract: This essay discusses qualitative versus quantitative accelerations of user tasks, in the context of computer animation production. A workflow regime is defined as a range of system response times in which the artist's relationship to the task is qualitatively similar. Radical new technology is much more likely to succeed when it brings an artist's workflow into a new regime, providing a discontinuous improvement in efficiency and final image quality. More modest technology revisions can supply smaller speed-ups, but these are merely consumed by Blinn's Law, i.e., the tendency for system response times to remain constant as technology improves, due to increased input complexity. We propose a list of workflow regimes and their ranges of response time.


  Stochastic Transparency
Eric Enderton, Erik Sintorn, Peter Shirley, David Luebke
IEEE Transactions on Visualization and Computer Graphics (TVCG), August 2011.
Original, shorter version: Symposium on Interactive 3D Graphics and Games (I3D) 2010
(Winner of Best Paper Award. Image selected for proceedings cover.)
[PDF] [Slides] [Videos] [Code from DX SDK]

Abstract: Stochastic transparency provides a unified approach to order-independent transparency, anti-aliasing, and deep shadow maps. It augments screen-door transparency using a random sub-pixel stipple pattern, where each fragment of transparent geometry covers a random subset of pixel samples of size proportional to alpha. This results in correct alpha-blended colors on average, in a single render pass with fixed memory size and no sorting, but introduces noise. We reduce this noise by an alpha correction pass, and by an accumulation pass that uses a stochastic shadow map from the camera. At the pixel level, the algorithm does not branch and contains no read-modify-write loops other than traditional z-buffer blend operations. This makes it an excellent match for modern massively parallel GPU hardware. Stochastic transparency is very simple to implement and supports all types of transparent geometry, mixing hair, smoke, foliage, windows, and transparent cloth in a single scene without any special-case code.
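As a rough illustration of the coverage idea in the abstract, here is a minimal CPU sketch (not the paper's GPU implementation, and using independent per-sample coverage rather than a stratified stipple pattern): each transparent fragment covers each of a pixel's samples with probability alpha, an ordinary z-test keeps the nearest covering fragment per sample, and averaging the samples gives the correct alpha-blended color in expectation.

```python
import random

def stochastic_transparency_pixel(fragments, num_samples=16, rng=random):
    """Resolve one pixel of transparent fragments stochastically.

    fragments: list of (depth, alpha, color) tuples, in any order.
    Each fragment covers each sample with probability alpha; a
    standard z-test then keeps the nearest covering fragment per
    sample, with no sorting and fixed memory.
    """
    sample_depth = [float("inf")] * num_samples
    sample_color = [0.0] * num_samples  # black background
    for depth, alpha, color in fragments:
        for s in range(num_samples):
            if rng.random() < alpha and depth < sample_depth[s]:
                sample_depth[s] = depth
                sample_color[s] = color
    return sum(sample_color) / num_samples
```

Averaged over many samples or frames, this converges to the sorted alpha blend; the paper's alpha-correction and accumulation passes are what reduce the single-frame noise.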


  A Local Image Reconstruction Algorithm for Stochastic Rendering
Peter Shirley, Timo Aila, Jonathan Cohen, Eric Enderton, Samuli Laine, David Luebke, Morgan McGuire
Symposium on Interactive 3D Graphics and Games (I3D) 2011

Abstract: Stochastic renderers produce unbiased but noisy images of scenes that include the advanced camera effects of motion and defocus blur and possibly other effects such as transparency. We present a simple algorithm that selectively adds bias in the form of image space blur to pixels that are unlikely to have high frequency content in the final image. For each pixel, we sweep once through a fixed neighborhood of samples in front-to-back order, using a simple accumulation scheme. We achieve good quality images with only 16 samples per pixel, making the algorithm potentially practical for interactive stochastic rendering in the near future.


  Colored Stochastic Shadow Maps
Morgan McGuire, Eric Enderton
Symposium on Interactive 3D Graphics and Games (I3D) 2011
[PDF] [Slides] [Video and More]

Abstract: This paper extends the stochastic transparency algorithm that models partial coverage to also model wavelength-varying transmission. It then applies this to the problem of casting shadows between any combination of opaque, colored transmissive, and partially covered (i.e., alpha-matted) surfaces in a manner compatible with existing hardware shadow mapping techniques. Colored Stochastic Shadow Maps have a similar resolution and performance profile to traditional shadow maps; however, they require a wider filter in colored areas to reduce hue variation.


  Real-Time Stochastic Rasterization on Conventional GPU Architectures
Morgan McGuire, Eric Enderton, Peter Shirley, David Luebke
High Performance Graphics 2010
(Second Place for Best Paper Award.)
[PDF] [Slides and more]

Abstract: This paper presents a hybrid algorithm for rendering approximate motion and defocus blur with precise stochastic visibility evaluation. It demonstrates, for the first time with a full stochastic technique, real-time performance on conventional GPU architectures for complex scenes at 1920x1080 HD resolution. The algorithm operates on dynamic triangle meshes for which per-vertex velocity or corresponding vertices from the previous frame are available. It leverages multisample antialiasing (MSAA) and a tight space-time-aperture convex hull to efficiently evaluate visibility independently of shading. For triangles that cross z=0, it falls back to a 2D bounding box that we hypothesize but do not prove is conservative. The algorithm further reduces sample variance within primitives by integrating textures according to ray differentials in time and aperture.


  Efficient Rendering of Human Skin
Eugene d'Eon, David Luebke, Eric Enderton
Eurographics Symposium on Rendering 2007    (Image selected for proceedings cover.)
[PDF] [Demo]

Abstract: Existing offline techniques for modeling subsurface scattering effects in multi-layered translucent materials such as human skin achieve remarkable realism, but require seconds or minutes to generate an image. We demonstrate rendering of multi-layer skin that achieves similar visual quality but runs orders of magnitude faster. We show that sums of Gaussians provide an accurate approximation of translucent layer diffusion profiles, and use this observation to build a novel skin rendering algorithm based on texture space diffusion and translucent shadow maps. Our technique requires a parameterized model but does not otherwise rely on any precomputed information, and thus extends trivially to animated or deforming models. We achieve about 30 frames per second for realistic real-time rendering of deformable human skin under dynamic lighting.
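The sum-of-Gaussians observation can be sketched in a few lines. The weights and variances below are illustrative placeholders, not the fitted skin profile from the paper:

```python
import math

def gaussian_2d(variance, r):
    """Radially symmetric 2D Gaussian at radius r, normalized to
    integrate to 1 over the plane."""
    return math.exp(-r * r / (2.0 * variance)) / (2.0 * math.pi * variance)

def diffusion_profile(r, terms):
    """Approximate a diffusion profile R(r) as a weighted sum of
    Gaussians: R(r) = sum_i w_i * G(v_i, r)."""
    return sum(w * gaussian_2d(v, r) for w, v in terms)

# Illustrative (weight, variance) pairs, variances in mm^2; the paper
# fits such terms to multi-layer translucent diffusion profiles.
EXAMPLE_TERMS = [(0.3, 0.05), (0.4, 0.2), (0.3, 1.0)]
```

Because each Gaussian term is separable, texture-space diffusion reduces to a few inexpensive 1D blurs per term, which is what makes the real-time version practical.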

(Image by Tweak Films.)

  GPU-Accelerated High Quality Hidden Surface Removal
Daniel Wexler, Larry Gritz, Eric Enderton, Jonathan Rice
Graphics Hardware 2005    (Image selected for proceedings cover.)

Abstract: High-quality off-line rendering requires many features not natively supported by current commodity graphics hardware: wide smooth filters, high sampling rates, order-independent transparency, spectral opacity, motion blur, depth of field. We present a GPU-based hidden-surface algorithm that implements all these features. The algorithm is Reyes-like but uses regular sampling and multiple passes. Transparency is implemented by depth peeling, made more efficient by opacity thresholding and a new method called z batches. We discuss performance and some design trade-offs. At high spatial sampling rates, our implementation is substantially faster than a CPU-only renderer for typical scenes.
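For intuition, here is a per-pixel CPU sketch of depth peeling with the opacity-thresholding early-out the abstract mentions. On the GPU each "peel" is a full render pass with a second depth test against the previous layer, but the compositing math is the same (the function and its structure here are an assumption for illustration, not the paper's implementation):

```python
def depth_peel_pixel(fragments, max_layers=8, opacity_threshold=0.001):
    """Composite transparent fragments front to back by repeated
    peeling, without ever sorting the fragment list.

    fragments: list of (depth, alpha, color) tuples, in any order.
    """
    color = 0.0
    transmittance = 1.0  # fraction of light still reaching the eye
    prev_depth = -float("inf")
    for _ in range(max_layers):
        # one "pass": nearest fragment strictly beyond the last peel
        layer = min((f for f in fragments if f[0] > prev_depth),
                    default=None, key=lambda f: f[0])
        if layer is None or transmittance < opacity_threshold:
            break  # nothing left to peel, or remaining layers invisible
        depth, alpha, c = layer
        color += transmittance * alpha * c
        transmittance *= 1.0 - alpha
        prev_depth = depth
    return color
```

The opacity-threshold test is the early-out that makes peeling affordable: once accumulated opacity is nearly 1, further layers cannot change the pixel.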

(Image by Frantic Films.)
  High-Quality Antialiased Rasterization
Daniel Wexler, Eric Enderton
GPU Gems II, chapter 21, 2005
[Web page] [source code]

Abstract: Finely detailed 3D geometry can show significant aliasing artifacts if rendered using native hardware multisampling, because multisampling is currently limited to one-pixel box filtering and low sampling rates. This chapter describes a tiled supersampling technique for rendering images of arbitrary resolution with arbitrarily wide user-defined filters and high sampling rates. The code presented here is used in the Gelato film renderer to produce images of uncompromising quality using the GPU.
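A toy version of the filtered-downsample step described above (the GPU tiling and rendering are omitted, and `gaussian_filter` is an illustrative separable filter, not Gelato's): the image is supersampled at `ss` x `ss` samples per pixel, then every sample within a user-defined filter radius of each output pixel center is weighted in, which native one-pixel box multisampling cannot do.

```python
import math

def downsample(samples, ss, filter_width, filter_fn):
    """Filter a supersampled image down to pixel resolution with a
    wide, user-defined separable filter.

    samples: 2D list of shape (H*ss, W*ss); filter_width is the
    filter's full width in output pixels.
    """
    H, W = len(samples) // ss, len(samples[0]) // ss
    radius = filter_width / 2.0
    out = [[0.0] * W for _ in range(H)]
    for py in range(H):
        for px in range(W):
            acc = wsum = 0.0
            # gather all supersamples within the filter radius
            y0 = max(0, int((py + 0.5 - radius) * ss))
            y1 = min(H * ss, int((py + 0.5 + radius) * ss) + 1)
            x0 = max(0, int((px + 0.5 - radius) * ss))
            x1 = min(W * ss, int((px + 0.5 + radius) * ss) + 1)
            for sy in range(y0, y1):
                for sx in range(x0, x1):
                    dx = (sx + 0.5) / ss - (px + 0.5)
                    dy = (sy + 0.5) / ss - (py + 0.5)
                    w = filter_fn(dx) * filter_fn(dy)
                    acc += w * samples[sy][sx]
                    wsum += w
            out[py][px] = acc / wsum if wsum else 0.0
    return out

def gaussian_filter(x, width=2.0):
    """Example 1D filter kernel spanning roughly `width` pixels."""
    return math.exp(-2.0 * (2.0 * x / width) ** 2)
```

Because the filter spans multiple output pixels, neighboring tiles must overlap by the filter radius when rendering in tiles, which is one of the design trade-offs of this approach.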

Link du jour: Regarding on-line reviews