Research Paper

Sadjad Fouladi, Brennan Shacklett, Fait Poms, Arjun Arora, Alex Ozdemir, Deepti Raghavan, Pat Hanrahan, Kayvon Fatahalian, Keith Winstein, “R2E2: Low-Latency Path Tracing of Terabyte-Scale Scenes using Thousands of Cloud CPUs,” ACM Transactions on Graphics, July 2022.

@article{fouladi2022r2e2,
author = {Fouladi, Sadjad and Shacklett, Brennan and Poms, Fait and Arora, Arjun and Ozdemir, Alex and Raghavan, Deepti and Hanrahan, Pat and Fatahalian, Kayvon and Winstein, Keith},
title = {R2E2: Low-Latency Path Tracing of Terabyte-Scale Scenes Using Thousands of Cloud CPUs},
year = {2022},
issue_date = {July 2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {41},
number = {4},
issn = {0730-0301},
url = {https://doi.org/10.1145/3528223.3530171},
doi = {10.1145/3528223.3530171},
abstract = {In this paper we explore the viability of path tracing massive scenes using a "supercomputer" constructed on-the-fly from thousands of small, serverless cloud computing nodes. We present R2E2 (Really Elastic Ray Engine) a scene decomposition-based parallel renderer that rapidly acquires thousands of cloud CPU cores, loads scene geometry from a pre-built scene BVH into the aggregate memory of these nodes in parallel, and performs full path traced global illumination using an inter-node messaging service designed for communicating ray data. To balance ray tracing work across many nodes, R2E2 adopts a service-oriented design that statically replicates geometry and texture data from frequently traversed scene regions onto multiple nodes based on estimates of load, and dynamically assigns ray tracing work to lightly loaded nodes holding the required data. We port pbrt's ray-scene intersection components to the R2E2 architecture, and demonstrate that scenes with up to a terabyte of geometry and texture data (where as little as 1/250th of the scene can fit on any one node) can be path traced at 4K resolution, in tens of seconds using thousands of tiny serverless nodes on the AWS Lambda platform.},
journal = {ACM Trans. Graph.},
month = {jul},
articleno = {76},
numpages = {12},
keywords = {lambda computing, ray tracing}
}

At a Glance

R2E2 is a parallel path-tracing renderer architected to leverage elastic cloud platforms, scaling to thousands of CPU cores and terabyte-scale scenes.

1. R2E2 decomposes the scene geometry and textures into small (hundreds of MB) objects and puts them in a blob store.

2. At the artist's request, the system quickly boots up thousands of tiny compute nodes.

3. Nodes fetch scene objects from the blob store and become responsible for servicing their objects.

4. Nodes cooperate, moving rays among one another to perform standard fire-and-forget path tracing.
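The steps above can be sketched as a toy model: scene objects are statically assigned to nodes, and each ray is always processed by a node that holds the object it must traverse next, getting forwarded when it needs a different object. This is a minimal illustrative sketch, not the actual R2E2 implementation — the function names, round-robin assignment, and single-owner-per-object simplification are all assumptions (the real system replicates hot objects based on load estimates and assigns work dynamically).

```python
from collections import defaultdict, deque

def assign_objects(object_ids, num_nodes):
    """Statically assign each scene object to one node, round-robin.
    (R2E2 instead replicates frequently traversed objects onto
    multiple nodes using per-object load estimates.)"""
    return {obj: i % num_nodes for i, obj in enumerate(object_ids)}

def trace(rays, owners, intersect):
    """Fire-and-forget tracing loop: a ray is handled by the node
    owning its current object; if traversal continues into another
    object, the ray is forwarded to that object's owner."""
    queues = defaultdict(deque)          # per-node inbound ray queues
    for ray in rays:
        queues[owners[ray["object"]]].append(ray)

    finished = []
    while any(queues.values()):
        for node in list(queues):
            batch, queues[node] = queues[node], deque()
            for ray in batch:
                next_obj = intersect(ray)     # next object id, or None if done
                if next_obj is None:
                    finished.append(ray)      # ray's path is complete
                else:
                    ray["object"] = next_obj
                    queues[owners[next_obj]].append(ray)  # forward the ray
    return finished
```

In the real system the "forwarding" is an inter-node messaging service designed for ray data, and termination is detected across thousands of nodes rather than in a single loop.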

Demo Video

The video shows a real-time screen capture of rendering the Moana Island scene using R2E2 on 2,000 tiny nodes on AWS Lambda.

  • Each node has 4 GB of memory and 3 vCPUs, for an aggregate of 8 TB of memory and 6,000 vCPUs.
  • The scene is decomposed into ∼1 GB objects stored in the S3 object store.
  • For demo purposes, the upfront preprocessing has already been done, generating the weights for the scene objects.
  • When the job is invoked, R2E2 immediately boots up thousands of nodes from zero.
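The aggregate figures above follow directly from the per-node specs; a quick sanity check (variable names are just for illustration):

```python
nodes = 2000            # tiny AWS Lambda nodes
mem_gb_per_node = 4     # memory per node
vcpus_per_node = 3      # vCPUs per node

total_mem_tb = nodes * mem_gb_per_node / 1000   # 8.0 TB aggregate memory
total_vcpus = nodes * vcpus_per_node            # 6,000 aggregate vCPUs
```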

Tech Video

The tech video will be available in late August!

Acknowledgements

We thank the ACM SIGGRAPH reviewers for their helpful comments and suggestions. We are grateful to Matt Pharr, Solomon Boulos, Feng Xie, and Marc Brooker for conversations and feedback. This work was supported in part by NSF grants 2045714, 2039070, 2028733, and 1763256, DARPA contract HR001120C0107, a Sloan Research Fellowship, and by Google, Huawei, VMware, Dropbox, Amazon, and Meta Platforms.

Contact Us

r2e2@cs.stanford.edu
Stanford Computer Science, 353 Jane Stanford Way, Stanford, CA