Week Overview

Monday     Tutorials, STARs & Education (09:00-16:30) · Opening Session (17:00-18:00) · Social Event (19:00-22:30)
Tuesday    Fast-Forward (09:00-09:30) · Keynote 1 (09:30-10:30) · Paper Sessions & Tutorials (11:00-17:30) · Fellows' Dinner (19:00-22:00)
Wednesday  Fast-Forward (09:00-09:30) · Keynote 2 (09:30-10:30) · Paper Sessions & Tutorials (11:00-17:30) · She-Lunch (12:30-13:00) · Poster Q&A (15:30-16:00) · EG General Assembly (17:45-18:45) · Public Lecture (19:30-21:00)
Thursday   Fast-Forward (09:00-09:30) · Keynote 5 (09:30-10:30) · Paper Sessions, STARs & Tutorials (11:00-17:30) · IPC Dinner (19:00-22:00)
Friday     Fast-Forward (09:00-09:30) · Keynote 6 (09:30-10:30) · Paper & Short Paper Sessions (11:00-12:30) · Best Paper Jury (12:30-12:45) · Closing Session (12:45-13:45)

Daily Program

Monday

09:00-10:30   Tutorials 1-2 · STARs 1-2
10:45-12:15   Tutorials 1-2 · STARs 3-4
13:15-14:45   Tutorials 3-4 · STAR 5 · Education 1
15:00-16:30   Tutorials 3-4 · STAR 6 · Education 2
17:00-18:00   Opening Session
19:00-22:30   Social Event

Tuesday

09:00-09:30   Fast-Forward
09:30-10:30   Keynote 1 · George Drettakis: The Quest for Easy Creation, Editing and Real-Time Rendering of Realistic 3D Scenes
11:00-12:30   Full Papers 1-3 · Education 3
14:00-15:30   Full Papers 4-5 · Tutorials 5-6
16:00-17:30   Full Papers 6-7 · Tutorials 5-6
19:00-22:00   Fellows' Dinner

Wednesday

09:00-09:30   Fast-Forward
09:30-10:30   Keynote 2 · Lourdes De Agapito Vicente: Learning to See the 3D World
11:00-12:30   Full Papers 8-10 · Short Paper 1
12:30-13:00   She-Lunch
14:00-15:30   Full Papers 11-12 · Tutorial 7
15:30-16:00   Poster Q&A
16:00-17:30   Full Papers 13-14 · Tutorial 7
17:45-18:45   EG General Assembly
19:30-21:00   Public Lecture: Keynote 3 (Björn Ommer) & Keynote 4 (Anatole Lécuyer)

Thursday

09:00-09:30   Fast-Forward
09:30-10:30   Keynote 5 · Jaakko Lehtinen: Graphics' Final Frontier
11:00-12:30   Full Papers 15-16 · Short Paper 2 · Invited STARs 7-8
14:00-15:30   Full Papers 17-18 · STAR 9 · Tutorial 8
16:00-17:30   Full Papers 19-20 · Tutorial 8
19:00-22:00   IPC Dinner

Friday

09:00-09:30   Fast-Forward
09:30-10:30   Keynote 6 · Bernd Bickel: Design in the Age of AI and Spatial Computing
11:00-12:30   Full Papers 21-22 · Short Papers 3-4
12:30-12:45   Best Paper Jury
12:45-13:45   Closing Session

Full Program

Tuesday · 11:00-12:30

Full Paper 1

Animating Humans with Gestures and Style

  1. Conversational Gesture Model (CGM): Extending Speaker-Centric Audio-Driven Motion Generation to Full Conversation Gestures

    Adi Rosenthal, Doron Friedman, Tomer Koren, Ariel Shamir

  2. Skeletal-Driven Animation of Anatomical Humans via Neural Deformation Gradients

    Gerrit Nolte, Fabian Kemper, Ulrich Schwanecke, Mario Botsch

  3. Dance Like a Chicken: Low-Rank Stylization for Human Motion Diffusion

    Jian Wang, Bing Zhou, Haim Sawdayee, Chuan Guo, Guy Tevet, Amit Haim Bermano

  4. SkinCells: Sparse Skinning using Voronoi Cells

    Doug Roble, Ladislav Kavan, Ryan Goldade, Igor Santesteban, Hsiao-yu Chen, Gene Lin, Egor Larionov, Philipp Herholz, Tuur Stuyck

  5. VQ-Style: Disentangling Style and Content in Motion with Residual Quantized Representations

    Dhruv Agrawal, Fatemeh Zargarbashi, Stelian Coros, Jakob Buhmann, Robert W. Sumner, Martin Guay

Tuesday · 11:00-12:30

Full Paper 2

Diffusion and Beyond: Controlled Image Generation and Stylization

  1. Graph-based Black and White Stylization

    Jimmy Lord, Ali Sattari Javid, David Mould

  2. Palette Aligned Image Diffusion

    Noy Porat, Elad Aharoni, Ariel Shamir, Dani Lischinski

  3. Latent Diffusion-GAN: Adversarial Learning in the Autoencoded Latent Space

    Jaeeun Ko, U Chae Jun, Jiwoo Kang

  4. Edge-preserving noise for diffusion models

    Jente Vandersanden, Sascha Holl, Xingchang Huang, Gurprit Singh

  5. TextFlux: An OCR-Free DiT Model for High-Fidelity Multilingual Scene Text Synthesis

    Gao Longwen, Weihang Wang, Peiyi Li, Pengyu Chen, Jielei Zhang, Qian Qiao, Yu Xie, Zhouhui Lian

Tuesday · 11:00-12:30

Full Paper 3

Structured for Speed: Spatial Representations for Real-Time Rendering

  1. Real-Time Rendering of Dynamic Line Sets using Voxel Ray Tracing

    Maxime Chamberland, Bram Kraaijeveld, Anna Vilanova, Andrei Jalba

  2. EBOAT: Error-Bounded Adaptive Tessellation of Singularities for Real-Time Catmull-Clark Subdivision Surfaces Rendering

    Yajun Zeng, Cong Chen, Ruicheng Xiong, Yang Lu, Ligang Liu

  3. Encoding Occupancy in Memory Location for Efficient and Compact High-Resolution Voxel Structures

    Jaina Modisett, Markus Billeter

  4. NePO: Neural Point Octrees for Large-scale Novel View Synthesis

    Noah Lewis, Darius Rückert, Marc Stamminger, Linus Franke

  5. NAADF: Globally Illuminated Voxel Worlds Accelerated with Nested Axis-Aligned Distance Fields

    Annalena Ulschmid, Marvin Ott, Jonas Macho, Michael Wimmer, Stefan Ohrhallinger

Tuesday · 14:00-15:30

Full Paper 4

Covering the Surface: Texture Synthesis, Patterns, and Compression

  1. Real-time by-example texture synthesis and filtering using local statistics exchange

    Guillaume Gilet, Nicolas Lutz

  2. Variable-Rate Texture Compression: Real-Time Rendering with JPEG

    Elias Kristmann, Michael Wimmer, Markus Schütz

  3. ProcTex: Consistent and Interactive Text-to-texture Synthesis for Part-based Procedural Models

    Zihan Zhu, Ruiqi Xu, Benjamin Ahlbrand, Srinath Sridhar, Daniel Ritchie

  4. Lightmap Compression with Color-Coherent UV Clustering and Cascade Texture Optimization

    Heng Cai, Hongyu Huang, Yuqing Zhang, Yuzhe Luo, Chao Li, Dehan Chen, Hao Xu, Sipeng Yang, Xifeng Gao, Xiaogang Jin

  5. Controllable Intrinsic Surface Pattern Generation Using Slime Mold Simulations

    Jeffrey Layton, Adam Runions, Faramarz Samavati

Tuesday · 14:00-15:30

Full Paper 5

Learning Surface and Scene Representations

  1. Mesh Processing Non-Meshes via Neural Displacement Fields

    Zhecheng Wang, Yuta Noma, Alec Jacobson, Chenxi Liu, Karan Singh

  2. Basis Networks: Learning basis functions for free-form triangulations

    Tobias Djuren, Marc Alexa

  3. Self-supervised Learning of Fine-to-Coarse Cuboid Shape Abstraction

    Gregor Kobsik, Morten Henkel, Yanjiang He, Victor Czech, Tim Elsner, Isaak Lim, Leif Kobbelt

  4. TLC-Plan: A Two-Level Codebook Based Network for End-to-End Vector Floorplan Generation

    Qiegen Liu, Xian Zhong, Zhen Peng, Wang Ping, Biao Xiong

  5. Floorplan Generation by Alternating Geometry and Semantics Optimization

    Sizhe Hu, Wenming Wu, Liping Zheng, Xiao-Ming Fu, Ligang Liu

Tuesday · 16:00-17:30

Full Paper 6

Go with the Flow: Fluid Simulation and Rendering

  1. Adaptive Optical Layers: Efficient Tall Cell Grids for Liquid Simulation

    Fumiya Narita, Takashi Kanai

  2. A Semi-Analytical Energy Model for Particle-Based Fluid Simulation Involving Complex Moving Boundaries

    Yin Li, Ruikai Liang, Junyuan Liu, Yuzhong Guo, Shusen Liu, Xiaowei He

  3. Dripping Thin Films for Real-time Digital Painting

    Zoé Herson, Axel Paris, Élie Michel

  4. Fluid Composer: Fluid Detail Composition and Rendering Using Video Diffusion Models

    Duowen Chen, Zhiqiang Lao, Yu Guo, Heather Yu

  5. A Particle-Based Approach to Extract Dynamic 3D FTLE Ridge Geometry

    Daniel Stelter, Thomas Wilde, Christian Rössl, Holger Theisel

Tuesday · 16:00-17:30

Full Paper 7

Structural Geometry: From Fabrication to Fracture

  1. Field-Aligned Surface-Filling Curve via Implicit Stitching

    Giovanni Cocco, Xavier Chermain

  2. Strain-Field Based Segmentation for Fabric Formwork

    Jeff Tedi, Tiffany Bao, Abhinit Sati, Edward Chien, Emily Whiting

  3. Designing inflatable shells using unstructured meshes

    Siyuan He, Arthur Lebée, Melina Skouras

  4. DeepFracture: A Generative Approach for Predicting Brittle Fractures with Neural Discrete Representation Learning

    Yuhang Huang, Takashi Kanai

Wednesday · 11:00-12:30

Full Paper 8

From Pixels to Scenes: 3D Reconstruction and Generation

  1. ZeroScene: A Zero-Shot Framework for 3D Scene Generation from a Single Image and Controllable Texture Editing

    Xiaopeng Fan, Xiang Tang, Ruotong Li

  2. GS-2M: Material-aware Gaussian Splatting for High-fidelity Mesh Reconstruction

    Malte Avenhaus, Dinh Minh Nguyen, Thomas Lindemeier

  3. Layer3D: A Layered 3D Representation for Multiview Vector Graphics

    Yixin Hu, Zhongyue Guan, Zeyu Wang

  4. GeoFusion-LRM: Geometry-Aware Iterative Conditioning for Consistent 3D Reconstruction

    Ahmet Burak Yildirim, Tuna Saygin, Aysegul Dundar, Duygu Ceylan

  5. UniCross3D: Unified Cross-View and Cross-Domain Diffusion for Consistent Single-Image 3D Generation

    Jaeeun Ko, U Chae Jun, Jiwoo Kang

Wednesday · 11:00-12:30

Full Paper 9

Motion in the Wild: From Individuals to Crowds

  1. Physics-Based Motion Tracking of Contact-Rich Interacting Characters

    Qianhui Men, Xiaotang Zhang, Ziyi Chang, Hubert P. H. Shum

  2. Step2Motion: Locomotion Reconstruction from Pressure Sensing Insoles

    Jose Luis Ponton, Eduardo Alvarado, Lin Geng Foo, Nuria Pelechano, Carlos Andujar, Marc Habermann

  3. ContactVision: Learning Foot Contact from Video for Physically Plausible Gait Animation

    Gyuseok Yi, DaeYong Kim, Ri Yu

  4. Herds from Video: Learning a Microscopic Herd Model from Macroscopic Motion Data

    Xianjin Gong, James Gain, Damien Rohmer, Sixtine Lyonnet, Julien Pettré, Marie-Paule Cani

  5. MPACT: Mesoscopic Profiling and Abstraction of Crowd Trajectories

    Marilena Lemonari, Andreas Panayiotou, Theodoros Kyriakou, Nuria Pelechano, Yiorgos Chrysanthou, Andreas Aristidou, Panayiotis Charalambous

Wednesday · 11:00-12:30

Full Paper 10

Light Transport: Sampling, Waves, and Denoising

  1. Wave Tracing: Generalizing The Path Integral To Wave Optics

    Shlomi Steinberg, Matt Pharr

  2. Gradient-domain ReSTIR Path Tracing

    Yu-Chen Wang, Daqi Lin, Markus Kettunen, Lifan Wu, Chris Wyman, Shuang Zhao

  3. Statistical denoising of transient rendering

    Oscar Pueyo-Ciutad, Álvaro López, Diego Gutierrez

  4. Stochastic Pairwise MIS for Fast Many-Candidate Resampling

    Trevor Hedstrom, Daqi Lin, Tzu-Mao Li, Markus Kettunen, Chris Wyman

  5. Deep Residual Combiner: A Learned Fusion of Spatial, Temporal, and Multiscale Correlated Pixel Estimates

    Euan Hughes, Weijie Zhou, Toshiya Hachisuka

Wednesday · 14:00-15:30

Full Paper 11

Hierarchical Geometry: Optimization and Simplification

  1. Convex Primitive Decomposition for Collision Detection

    Julian Knodt, Xifeng Gao

  2. Construction of clustered HLOD with As-Simplified-As-Possible boundaries

    Marc Trabucato, Alexis Vaisse, Mathieu Ladeuil, Noura Faraj

  3. Hierarchical Optimization of the As-Rigid-As-Possible Energy

    Bernd Bickel, Hendrik Meyer, Marc Alexa

  4. Layout Embedding Optimization via Distortion Minimization

    Alexandra Heuschling, Isaak Lim, Leif Kobbelt

  5. Contouring Signed Distance Fields by Approximating Gradients

    Maximilian Kohlbrenner, Marc Alexa

Wednesday · 14:00-15:30

Full Paper 12

Temporal Vision: Video Generation, Pose, and Narrative

  1. Story2Board: A Training‑Free Approach for Expressive Storyboard Generation

    Dvir Samuel, David Dinkevich, Matan Levy, Omri Avrahami, Dani Lischinski

  2. SAGE: Structure-Aware Generative Video Transitions between Diverse Clips

    Mia Kan, Yilin Liu, Niloy Mitra

  3. MultiCOIN: Multi-Modal COntrollable INbetweening

    Simon Niklaus, Krishna Kumar Singh, Maham Tanveer, Yang Zhou, Nanxuan Zhao, Ali Mahdavi Amiri, Hao (Richard) Zhang

  4. See4D: Pose-Free 4D Generation via Auto-Regressive Video Inpainting

    Wei Yin, Liang Pan, Lingdong Kong, Baorui Ma, Yuyang Zhao, Xiao Fu, Alan Liang, Dongyue Lu, Tianxin Huang, Ziwei Liu, Wei Tsang Ooi

  5. Enhancing Robust Category-Agnostic Pose Estimation through Multi-Modal Feature Alignment

    Juan Liu, Boxuan Li

Wednesday · 16:00-17:30

Full Paper 13

2D and Beyond: Stylized Animation and Reconstruction

  1. 3D Character Reconstruction from Hand-drawn Model Sheets

    Yoonha Hwang, Hyejeong Yoon, Wonjong Jang, Seungyong Lee

  2. Generative Cutout Animation

    Ivan Puhachov, Noam Aigerman, Thibault Groueix, Mikhail Bessmeltsev

  3. Mixed Super-Circles

    Emile Hohnadel, Thibaut Métivet, Florence Bertails-Descoubes

  4. Vector sketch animation generation with differentiable motion trajectories

    Shuyang Zheng, Zhexin Zhang, Jing Huang, Fei Gao, Xinye Yang, Xinding Zhu, Jiazhou Chen

Wednesday · 16:00-17:30

Full Paper 14

Solving Deformation: Numerical Methods for Elastic Simulation

  1. STAGED: Stress-Tensor Assisted Global–local-global solver for interactive Elastic shape Design

    Liangwang Ruan, Baoquan Chen, Tiantian Liu, Bin Wang

  2. Interpolated Adaptive Linear Reduced Order Modeling for Deformation Dynamics

    Pablo Fernandez, Maurizio Chiaramonte, Yutian Tao

  3. Progressively Projected Newton’s Method

    Fabian Löschner, José Antonio Fernández-Fernández, Jan Bender

  4. Affinification: A Fine Approximation of Deformations

    Alexandre Mercier-Aubin, Sheldon Andrews, Teseo Schneider, Paul Kry

Thursday · 11:00-12:30

Full Paper 15

Digital Humans: From Capture to Control

  1. DexterCap: Affordable and Automated Capture of Complex Hand-Object Interactions

    He Zhang, Bowen Zhan, Yulong Zhang, Shiyi Xu, Yutong Liang, Libin Liu

  2. Improving Facial Rig Semantics for Tracking and Retargeting

    Allise Thurman, Dalton Omens, Ron Fedkiw, Jihun Yu

  3. CANRIG: Cross-Attention Neural Face Rigging with Variable Local Control

    Arad Mohammadi, Loïc Ciccone, Jakob Buhmann, Sebastian Weiss, Martin Guay, Derek Bradley, Robert Sumner

  4. GTAvatar: Bridging Gaussian Splatting and Texture Mapping for Relightable and Editable Gaussian Avatars

    Francois Bourel, Mae Younes, Kelian Baert, Adnane Boukhayma, Marc Christie

  5. Neuralocks: Real-Time Dynamic Neural Hair Simulation

    Doug Roble, Hsiao-yu Chen, Egor Larionov, Tuur Stuyck, Gene Wei-Chin Lin

Thursday · 11:00-12:30

Full Paper 16

Measuring and Modeling Material Appearance

  1. High-Gloss SVBRDF Capture Using Bounce Light

    Tomáš Iser, Andrei-Timotei Ardelean, Tim Weyrich

  2. A Texture-Free Multi-Scale Model for Surface-Based Rendering of Knitted Fabrics

    Apoorv Khattar, Jean-Marie Aubry, Zahra Montazeri, Lingqi Yan

  3. A Discrete Polydisperse Porous BSDF Model based on the Micrograin Framework

    Kewei Xu, Simon Lucas, Benjamin Bringier, Mickael Ribardiere, Pascal Barla

  4. HiMat: DiT-based Ultra-High Resolution SVBRDF Generation

    Jian Yang, Zixiong Wang, Yiwei Hu, Milos Hasan, Beibei Wang

  5. Digitisation of Impasto and Gloss in Oil Paintings via Spatially Varying Bidirectional Reflectance Distribution Function Acquisition

    Chih Yang, Tzung-Han Lin

Thursday · 14:00-15:30

Full Paper 17

From Leaf to Planet: Natural Environment Generation and Simulation

  1. LeafFit: Plant Assets Generation from 3D Gaussian Splatting

    Chang Luo, Nobuyuki Umetani

  2. TreeON: Reconstructing 3D Tree Point Clouds from Orthophotos and Heightmaps

    Johannes Eschner, Angeliki Grammatikaki, Manuela Waldner, Pedro Hermosilla, Oscar Argudo

  3. HeatMat: Simulation of City Material Impact on Urban Heat Island Effect

    Elisabeth Brunet, Catalin Fetita, Mikolaj Czerkawski, Peter Naylor, Marie Reinbigler, Nikolaos (ESA), Romain Rouffet, Rosalie Martin

  4. Authoring Terrestrial Planets with Diffusion Models

    Oliver Borg, Adrien Peytavie, Guillaume Cordonnier, James Gain, Marie-Paule Cani, Eric Guerin, Eric Galin

  5. Terrain synthesis and authoring based on iso-contours

    Benoit Huftier, Hugo Schott, Oscar Argudo, Adrien Peytavie, Eric Guerin, Eric Galin

Thursday · 14:00-15:30

Full Paper 18

Neural Appearance: Reflectance, Irradiance, and Light Transport

  1. Neural Progressive Photon Mapping

    Justin Benoist, Joey Litalien, Adrien Gruson

  2. Neural Local Inter-reflection Modeling for Garment Fold Rendering

    Gyoonseo Kim, Nuri Ryu, Jooeun Son, Joo Ho Lee, Seungyong Lee

  3. Real-time Rendering with a Neural Irradiance Volume

    Arno Coomans, Giacomo Nazzaro, Edoardo Alberto Dominici, Christian Döring, Floor Verhoeven, Konstantinos Vardis, Markus Steinberger

  4. A Real-Time Multi-Scale Neural Representation for Complex Surface Reflectance

    Heikki Timonen, Pauli Kemppinen, Jaakko Lehtinen

  5. BRDF Importance Baking: A Lightweight Neural Solution to Importance Sampling General Parametric BRDFs

    Yaoyi Bai, Songyin Wu, Zheng Zeng, Lingqi Yan, Beibei Wang

Thursday · 16:00-17:30

Full Paper 19

Parametric and Structured Geometry

  1. CADrawer: Autoregressive CAD Generation from 3D Sketches

    Henro Kriel, Chengye Hao, Yuanbo Li, Gilda Manfredi, Xianghao Xu, Daniel Ritchie, Adrien Bousseau

  2. Differentiable variable fonts

    Kinjal Parikh, David Levin, Alec Jacobson, Danny Kaufman

  3. 2D Piecewise Linear Scalar Fields with Invertible Integral Lines

    Timm Leon Erxleben, Michael Motejat, Christian Roessl, Holger Theisel

  4. Register-Efficient Linear-Time Evaluation in the Bernstein Basis

    Anna Lili Horváth, Gábor Valasek

  5. Improving the watertightness of parametric surface/surface intersection

    Yuqing Wang, Xiaohong Jia, Jieyin Yang, Bolun Wang, Pengbo Bo, Yang Liu

Thursday · 16:00-17:30

Full Paper 20

Immersive and Interactive: Rendering Across Displays and Devices

  1. Robo-Saber: Generating and Simulating Virtual Reality Players

    May Liu, Jason Peng, Nam Hee Kim, Perttu Hämäläinen, Jaakko Lehtinen, James O'Brien

  2. Real-Time Neural Materials on Mobile VR

    Anton Michels, Yehonathan Litman, Yang Zhou, Zilin Xu, Matt Jen-Yuan Chiang, Lingqi Yan

  3. ML-PEA: Machine Learning-Based Perceptual Algorithms for Display Power Optimization

    Ajit Ninan, Nathan Matsuda, Thomas Wan, Kenneth Chen, Qi Sun, Alexandre Chapiro

  4. ProjectiveShading: Inserting 3D Objects into Indoor Images with Complex Shadows

    Wenbin Li, Jundan Luo, Xiaolong Wu, Nanxuan Zhao, Lu Wang, Christian Richardt

  5. PBR-Inspired Controllable Diffusion for Image Generation

    Bowen Xue, Zahra Montazeri, Giuseppe Claudio Guarnera, Shuang Zhao

Friday · 11:00-12:30

Full Paper 21

Maps and Meshes: Parameterization and Geometry Processing

  1. Adaptive Use of LBO Bases by Shape Feature Scales for High-Quality and Efficient Shape Correspondence

    Chong Zhao, Wencheng Wang, Fei Hou

  2. TABI: Tight and Balanced Interactive Atlas Packing

    Floria Gu, Nicholas Vining, Alla Sheffer

  3. Volume Quantization with Flexible Singularities for Hexahedral Meshing

    Hendrik Brückler, Marcel Campen

  4. Fast Injective Mesh Parameterization via Beltrami Coefficient Prolongation

    Guy Fargion, Ofir Weber

  5. DiskScissors: Cutting Arbitrary-Topology Solids for Bijective Mapping

    Steffen Hinderink, Marcel Campen

Friday · 11:00-12:30

Full Paper 22

Advancing 3D Gaussian Splatting

  1. Multi-Spectral Gaussian Splatting with Neural Color Representation

    Maximilian Weiherer, Josef Grün, Lukas Meyer, Bernhard Egger, Linus Franke, Marc Stamminger

  2. RotGS: Rotation-Guided 3D Gaussian Splatting for Turntable Sequences without Structure-from-Motion

    Kyumin Kim, Dohae Lee, Hanul Baek, In-Kwon Lee

  3. Adaptive Spatio-Temporal 3D Gaussian Splatting for Scenes with Oscillatory Motion

    Jeffrey Hu, Andréas Meuleman, Petros Tzathas, Guillaume Cordonnier, George Drettakis

  4. OUGS: Active View Selection via Object-aware Uncertainty Estimation in 3DGS

    Haiyi Li, Qi Chen, Denis Kalkofen, Hsiang-Ting Chen

  5. Splat-based Metal Artifact Reduction in Cone-Beam CT via Polychromatic Modeling

    Hyeongjun Cho, Kiseok Choi, Jaemin Cho, Inchul Kim, Min H. Kim

Wednesday · 11:00-12:30

Short Paper 1

Thursday · 11:00-12:30

Short Paper 2

Friday · 11:00-12:30

Short Paper 3

Friday · 11:00-12:30

Short Paper 4

Monday · 09:00-10:30 · 10:45-12:15

Tutorial 1

Simulation Methods for Multiphysics Phenomena in Visual Computing

Fabian Löschner, Stefan Rhys Jeske, José Antonio Fernández-Fernández, and Jan Bender

Monday · 09:00-10:30 · 10:45-12:15

Tutorial 2

A Hands-On Introduction to Discrete Differential Operators on Polygon Meshes

Sven Dominik Wagner, Astrid Bunge, and Mario Botsch

Monday · 13:15-14:45 · 15:00-16:30

Tutorial 3

Deep Learning on Meshes and Point Clouds

Ruben Wiersma

Monday · 13:15-14:45 · 15:00-16:30

Tutorial 4

Optimal Transport for Fluid Simulation New and Old

Cyprien Plateau--Holleville and Bruno Lévy

Tuesday · 14:00-15:30 · 16:00-17:30

Tutorial 5

Fast Explicit 3D Reconstructions and How To Use Them

Bernhard Kerbl, Markus Steinberger, Linus Franke, Florian Hahlbohm, and Andrea Tagliasacchi

Tuesday · 14:00-15:30 · 16:00-17:30

Tutorial 6

Effective User Studies in Computer Graphics: From Pixels to Perception

Daniel Martin, Martin Weier, Piotr Didyk, Mauricio Flores-Vargas, Ernst Kruijff, Rachel McDonnell, and Sandra Malpica

Wednesday · 14:00-15:30 · 16:00-17:30

Tutorial 7

Convex Optimization in Computer Graphics

Leticia Mattos Da Silva

Thursday · 14:00-15:30 · 16:00-17:30

Tutorial 8

Introduction to Optimization Time Integration for Solids and Fluids

Jiayi Eris Zhang and Minchen Li

Monday · 09:00-10:30

STAR 1

Magnetic Modeling and Simulation for Computer Graphics

Xingyu Ni, Yuechen Zhu, Ruicheng Wang, and Bin Wang

Monday · 09:00-10:30

STAR 2

Advances in Neural 3D Mesh Texturing: A Survey

Sai Raj Kishore Perla, Hao (Richard) Zhang, and Ali Mahdavi-Amiri

Monday · 10:45-12:15

STAR 3

Survey on differential estimators for 3D point clouds

Léo Arnal--Anger, Thibault Lejemble, David Coeurjolly, Loïc Barthe, and Nicolas Mellado

Monday · 10:45-12:15

STAR 4

Establishing Shape Correspondences: A Survey

Alexandra Heuschling, Hannah Meinhold, and Leif Kobbelt

Monday · 13:15-14:45

STAR 5

How to Build Digital Humans?

Wojciech Zielonka, Tobias Kirschstein, Timo Bolkart, Simon Giebenhain, Vanessa Sklyarova, Xiang Deng, Donglai Xiang, Shunsuke Saito, Yebin Liu, Matthias Niessner, and Justus Thies

Monday · 15:00-16:30

STAR 6

Non-Rigid 3D Shape Correspondences: From Foundations to Open Challenges and Opportunities

Aleksei Zhuravlev, Lennart Bastian, Dongliang Cao, Nafie El Amrani, Paul Roetzer, Viktoria Ehm, Riccardo Marin, Hiroki Nishizawa, Shigeo Morishima, Christian Theobalt, Nassir Navab, Daniel Cremers, Florian Bernard, Zorah Lähner, and Vladislav Golyanik

Thursday · 11:00-12:30

Invited STARs - 45 min each

STAR 7

Deep Sketch-Based 3D Modeling: A Survey

https://doi.org/10.1111/cgf.70302

Alberto Tono, Jiajun Wu, Gordon Wetzstein, Iro Armeni, Hariharan Subramonyam, James Landay, and Martin Fischer

STAR 8

A Survey of Inter-Prediction Methods for Time-Varying Mesh Compression

https://doi.org/10.1111/cgf.15278

Jan Dvořák, Filip Hácha, Gerasimos Arvanitis, David Podgorelec, Konstantinos Moustakas, and Libor Váša

Thursday · 14:00-15:30

STAR 9

State-of-the-art in deep learning approaches for single-panorama indoor modeling and exploration

Giovanni Pintore, Marco Agus, Jens Schneider, and Enrico Gobbetti

Monday · 13:15-14:45

Education 1

Monday · 15:00-16:30

Education 2

Tuesday · 11:00-12:30

Education 3

Tuesday · 09:30-10:30

Keynote 1

The Quest for Easy Creation, Editing and Real-Time Rendering of Realistic 3D Scenes

George Drettakis

Wednesday · 09:30-10:30

Keynote 2

Learning to See the 3D World

Lourdes De Agapito Vicente

Wednesday · 19:30-21:00

Keynote 3

Björn Ommer

Keynote 4

Shaping the future of our 3D immersion in digital worlds

Anatole Lécuyer

Thursday · 09:30-10:30

Keynote 5

Graphics' Final Frontier

Jaakko Lehtinen

Friday · 09:30-10:30

Keynote 6

Design in the Age of AI and Spatial Computing

Bernd Bickel

Wednesday · 15:30-16:00

Poster Q&A

Keynotes

George Drettakis
Inria Université Côte d’Azur

The Quest for Easy Creation, Editing and Real-Time Rendering of Realistic 3D Scenes

In this talk we will present over 25 years of research motivated by the goal of providing solutions to easily create realistic 3D scenes by capturing real content, allowing subsequent editing -- most importantly relighting -- and allowing real-time rendering of the resulting scenes. We will look back at several early projects, and how they allowed us to advance our understanding of the fundamental difficulties of developing algorithms to achieve our goals by building on physics-based rendering and traditional graphics solutions. We will then stress the importance of being open to new tools and methodologies, most importantly deep learning. We will illustrate how adopting such techniques and methodologies early provided a significant advantage, both in relighting and real-time rendering for novel view synthesis, in part by building on our expertise in realistic rendering for training data generation. We will discuss the importance of efficiency and optimization even in early stages of these research projects, and finally discuss how the power of recent generative models provides exciting new possibilities, opening the way to powerful solutions to our overarching goals of easily creating, editing and rendering realistic 3D content.

George Drettakis graduated in Computer Science (CS) in Crete, Greece, obtained an M.Sc. and a Ph.D. (1994) in CS at the University of Toronto, Canada, under the supervision of Eugene Fiume, followed by an ERCIM postdoc in Grenoble, Barcelona and Bonn (94-95). He obtained an Inria researcher position in the iMAGIS group in Grenoble (1995), and the degree of "Habilitation" at the University of Grenoble (1999). In 2000 he founded the REVES research group at INRIA Sophia-Antipolis (2002-2015), followed by the current GRAPHDECO group. He has received several awards: the Eurographics (EG) Outstanding Technical Contributions award in 2007, EG Distinguished Career Award (2024), Inria-French Academy of Sciences Grand Prix (2024), the ACM SIGGRAPH Computer Graphics Achievement Award (2025), and was named an EG Fellow (2007) and an ACM Fellow (2026). He was papers co-chair of the EG Rendering Workshop in 1998 and the EG conference in 2002 and 2008, technical papers chair of SIGGRAPH Asia 2010, associate editor for major graphics journals, and chairs the EG working group on Rendering. His research spans many topics in computer graphics, with an emphasis on rendering. He initially concentrated on lighting and shadow computation and subsequently worked on 3D audio, perceptually-driven algorithms, virtual reality and 3D interaction. In recent years he has focused more on learning-based appearance capture, relighting and novel view synthesis (previously known as image-based rendering), culminating in the development of 3D Gaussian Splatting.

Jaakko Lehtinen
Aalto University / NVIDIA Research

Graphics' Final Frontier

Computer graphics has undergone an incredible journey from its (visually) humble beginnings into our current ability to simulate the appearance and motion of complex scenes to a degree often difficult to distinguish from reality. Yet closing the final gap to the look and feel of live action footage remains elusive. At the same time, modern purely data-driven methods routinely surpass the realism of traditional first-principles graphics approaches, but come with only coarse controls. In this talk, I'll draw on my experience of working with both classic and data-driven image generation techniques and attempt to outline a vision for the "endgame" of computer graphics that synthesizes the classic first-principles approaches with the power of data.

Jaakko is an associate professor at Aalto University and a distinguished research scientist at NVIDIA Research in Helsinki, Finland. He works on computer graphics and machine learning, with particular interests in generative modelling, realistic image synthesis, and appearance acquisition and reproduction. Overall, he's fascinated by the combination of machine learning techniques with physical simulators in the search for robust, interpretable AI. Prior to his current positions, Jaakko spent 2007-10 as a postdoc with Frédo Durand at MIT. Before his research career, he worked for the game developer Remedy Entertainment in 1996-2005 as a graphics programmer, and contributed significantly to the graphics technology behind the worldwide blockbuster hit games Max Payne (2001), Max Payne 2 (2003), and Alan Wake (2009).

Lourdes De Agapito Vicente
University College London / Synthesia Technologies

Learning to See the 3D World

Building algorithms that can emulate human 3D perception, using as input single images or video sequences taken with a consumer camera, proved to be a challenging task for years but has recently seen astounding progress. For decades, machine learning solutions faced the challenge of scarcity of 3D annotations, encouraging important advances in weak and self-supervision. However, recent efforts in large-scale paired image-3D dataset collection have led to a paradigm shift and fully supervised feed-forward large 3D reconstruction models have become a reality. In this talk I will describe progress in both static and dynamic 3D reconstruction, from early optimization-based solutions that captured sequence-specific 3D models, towards more powerful 3D-aware neural representations that can be trained from 2D image supervision only, to today’s large transformer-based, multi-view feed-forward models for metric-scale dense 3D reconstruction. I will also describe the successful commercial uptake of this technology and will show its application to AI-driven video synthesis.

Lourdes holds the position of Professor of 3D Vision at the Department of Computer Science, University College London (UCL) where she heads the Vision and Imaging Science Group. She received her BSc, MSc and PhD degrees from Universidad Complutense de Madrid (Spain). In 1997 she joined the Robotics Research Group at the University of Oxford as an EU Marie Curie Fellow. In 2001 she was appointed Lecturer at Queen Mary University of London, where she held an ERC Grant. Lourdes joined UCL in 2013 and was promoted to full professor in 2015. Her research in computer vision has consistently focused on the inference of 3D information from images or videos acquired with a single camera. Lourdes has served as Program Chair for CVPR 2016 and ICCV 2023, serves regularly as Area Chair for the top Computer Vision conferences (CVPR, ICCV, ECCV) and was Keynote speaker at ICRA 2017, ICLR 2021 and ECCV'24. Lourdes is co-founder of London-based startup Synthesia, the world’s largest AI video generation platform for business, currently valued at $4B. Synthesia's text-to-video technology allows users to create professional videos directly on the browser, removing the physical constraints of conventional production.

Bernd Bickel
ETH Zurich

Design in the Age of AI and Spatial Computing

As the boundaries between the digital and physical worlds blur, we face a profound opportunity to reimagine how we design the world around us. While advanced manufacturing, artificial intelligence, and spatial computing offer unprecedented potential for architecture, engineering, and art, their impact is often limited by a lack of design tools that can seamlessly bridge human creativity with physical realizability. In this talk, I will explore the transformation of design workflows from traditional CAD tools toward intelligent design systems. I will discuss how optimization-based design and tailored data-driven models enable novel approaches for interactive shape exploration and beyond, demonstrating their applicability to challenges ranging from intricate microstructures to high-performance building facades. A central theme is the control problem: the inherent tension between the probabilistic nature of modern generative AI and the high precision and editability required for professional engineering. I will conclude by reflecting on the evolving role of algorithms as creative partners. I will share a vision for a future where technology provides the "digital superpowers" that complement rather than replace human intuition, enabling us to build a more sustainable, functional, and resilient world.

Bernd Bickel is a Full Professor of Computational Design at ETH Zurich and a Research Scientist at Google. He previously served as a Professor and Vice President at ISTA and worked as a Research Scientist at Disney Research. He received his PhD in Computer Science from ETH Zurich in 2010. His research intersects visual computing, digital fabrication, and machine learning, focusing on computational tools that bridge digital design and physical manufacturing. His work includes high-fidelity performance capture, data-driven material modeling, functional metamaterials, and creative AI & generative design, integrating physics-based simulation with machine learning to create high-performance structures and systems. Bernd’s contributions have been recognized with a Technical Achievement Award from the Academy of Motion Picture Arts and Sciences (2019), the ACM SIGGRAPH Significant New Researcher Award (2017), an ERC Starting Grant (2016), and the ETH Medal (2011) for his doctoral dissertation.

Anatole Lécuyer
Inria Rennes/IRISA

Shaping the future of our 3D immersion in digital worlds

Virtual reality (VR) naturally evokes a set of advanced technologies designed to immerse users in synthetic 3D worlds simulated in real time by a computer. Through dedicated interfaces such as head‑mounted displays, VR applications enable powerful experiences, transporting users to imaginary places or allowing them to interact with virtual characters and remote people. The first VR systems date back to the 1960s, but today we are living through a pivotal moment for the field, as it steadily moves toward widespread, mass‑market adoption. In this talk, we will explore the next steps for VR technologies. We will first argue that VR is progressively introducing greater physical engagement into 3D human-computer interaction, for example through haptic technologies (tactile or force feedback) or through virtual embodiment via self‑avatars (anthropomorphic representations of the user within a virtual environment). We will also examine the ongoing convergence of VR with physiological and neural interfaces, pointing toward future interactive systems that directly leverage users’ cognitive states and open the door to even more compelling and holistic experiences. The talk will be illustrated with some of our latest scientific results, offering a glimpse of what could become… the future of our 3D immersion in digital worlds.

Anatole Lécuyer is Director of Research at Inria, the French National Institute for Research in Digital Science and Technology, based in Rennes. For more than 20 years, he has been conducting research in the field of virtual reality, exploring new ways of interacting with virtual worlds, such as haptic or neural interfaces. He is the co‑author of over 250 scientific publications and 15 patents. He serves as an expert for numerous organizations, including the French National Research Agency and the European Commission. He served as Associate Editor of IEEE Transactions on Visualization and Computer Graphics and of the journal Presence. He was General Chair of the IEEE Virtual Reality Conference (2025), Program Chair of the IEEE Virtual Reality Conference (2015-2016), and General Chair of the IEEE Symposium on Mixed and Augmented Reality (2017). Anatole Lécuyer received the Inria–Académie des Sciences Young Researcher in Digital Science Award in 2013, the IEEE VGTC Technical Achievement Award in Virtual/Augmented Reality in 2019, and was inducted into the IEEE Virtual Reality Academy in 2022.

Björn Ommer
Ludwig Maximilian University of Munich

(stay tuned for talk information)

Björn Ommer is a full professor of Computer Science at LMU Munich, where he leads the Computer Vision & Learning Group. Previously he was a full professor at Heidelberg University. After studying computer science and physics at the University of Bonn, he earned a Ph.D. from ETH Zurich and held a postdoc position at UC Berkeley. He is LMU's Chief AI Officer, a director of the Bavarian AI Council, and an ELLIS Fellow, and has served as an editor for IEEE T-PAMI and on the boards of numerous CVPR, ICCV, ECCV, and NeurIPS conferences. Björn's research interests are in generative AI, visual understanding, and explainable neural networks. His group has developed several influential approaches in generative modeling, such as Stable Diffusion, which have seen broad adoption across academia, industry, and beyond, reflecting his broader goal of advancing the democratization of generative AI.