← Back to Cohere videos
Cohere VIDEO · 20 April 2026
YouTube


Aashish Rai - Video Native Representations for 4D Gaussian Scenes


Video details
AI maker: Cohere · Published: 20 April 2026 · Channel: Cohere · Playlist: Uploads from Cohere · Watch on YouTube

About this video

Volumetric videos offer immersive 4D experiences, but remain difficult to reconstruct, store, and stream at scale. Existing Gaussian Splatting-based methods achieve high-quality reconstruction but break down on long sequences, suffer from temporal inconsistency, and fail under large motions and disocclusions. Moreover, their outputs are typically incompatible with conventional video coding pipelines, preventing practical deployment. Rai et al. introduce PackUV, a novel 4D Gaussian representation that maps all Gaussian attributes into a sequence of structured, multi-scale UV atlases, enabling compact, image-native storage. To fit this representation from multi-view videos, they propose PackUV-GS, a temporally consistent fitting method that directly optimizes Gaussian parameters in the UV domain. An optical flow-guided Gaussian labeling and video keyframing module identifies dynamic Gaussians, stabilizes static regions, and preserves temporal coherence even under large motions and disocclusions. The resulting UV atlas format is the first unified volumetric video representation fully compatible with standard video codecs (e.g., FFV1) without quality loss, enabling efficient streaming within existing multimedia infrastructure. To evaluate long-duration volumetric capture, they present PackUV-2B, the largest multi-view video dataset to date, featuring 50-90 synchronized cameras, substantial motion, and frequent disocclusions across 100+ sequences and over two billion frames. Extensive experiments demonstrate that the method surpasses existing baselines in rendering fidelity while scaling to sequences up to 30 minutes long with consistent quality.
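The core idea behind an image-native representation, packing per-Gaussian attributes into a 2D image grid so that standard lossless image/video codecs can carry them, can be illustrated with a minimal sketch. This is a hypothetical layout for illustration only; PackUV's actual multi-scale atlas structure and codec pipeline are described in the talk, and the function names below (`pack_to_atlas`, `unpack_from_atlas`) are invented for this example:

```python
import numpy as np

def pack_to_atlas(attrs: np.ndarray, width: int) -> np.ndarray:
    """Pack an (N, C) array of per-Gaussian attributes into an
    (H, width, C) image-like atlas, zero-padding the tail row."""
    n, c = attrs.shape
    height = -(-n // width)  # ceiling division
    atlas = np.zeros((height * width, c), dtype=attrs.dtype)
    atlas[:n] = attrs
    return atlas.reshape(height, width, c)

def unpack_from_atlas(atlas: np.ndarray, n: int) -> np.ndarray:
    """Invert pack_to_atlas, recovering the first n attribute rows."""
    h, w, c = atlas.shape
    return atlas.reshape(h * w, c)[:n]

# 1000 Gaussians with 3 attribute channels (e.g., position),
# packed into a 64-pixel-wide "image" and recovered losslessly.
rng = np.random.default_rng(0)
gaussians = rng.standard_normal((1000, 3)).astype(np.float32)
atlas = pack_to_atlas(gaussians, width=64)
assert np.array_equal(unpack_from_atlas(atlas, 1000), gaussians)
print(atlas.shape)  # (16, 64, 3)
```

Because the round trip is bit-exact, any lossless codec applied to the atlas frames (FFV1 is the example the abstract gives) preserves the Gaussian parameters exactly; the talk's contribution is making such atlases temporally coherent so they also compress well as video.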

Aashish Rai is a Ph.D. student in Computer Science at Brown University, advised by Srinath Sridhar. His research focuses on efficient methods for 3D and 4D novel view synthesis, reconstruction, and generative world modeling. He has also worked at Meta Reality Labs, developing methods that leverage 2D foundation models for 3D asset synthesis. Previously, he was a Research Assistant at Carnegie Mellon University’s Robotics Institute, working with Fernando De la Torre on realistic 3D face generation using 2D models.

This session is brought to you by the Cohere Labs Open Science Community - a space where ML researchers, engineers, linguists, social scientists, and lifelong learners connect and collaborate with each other. We'd like to extend a special thank you to Benedict Emoekabu and Mayank Bhaskar, Leads of our Computer Vision group, for their dedication in organizing this event.

If you’re interested in sharing your work, we welcome you to join us! Simply fill out the form at https://forms.gle/ALND9i6KouEEpCnz6 to express your interest in becoming a speaker.

Join the Cohere Labs Open Science Community to see a full list of upcoming events (https://tinyurl.com/CohereLabsCommunityApp).

More videos from Cohere

All videos
Shuo Li Liu - Coherence in RLHF Preference Data
Cohere
24 Apr 2026

Shuo Li Liu - Coherence in RLHF Preference Data

RLHF usually learns from pairwise comparisons, often through Bradley-Terry-style models. I will discuss what coherence requirements, such as Weak Stochastic Transitivity and the Weak Axiom of Revealed Preference, mean for preference trained...

Open video →
Jiafei Duan - Building Robotics Foundation Model with Reasoning in the Loop
Cohere
24 Apr 2026

Jiafei Duan - Building Robotics Foundation Model with Reasoning in the Loop

Scaling alone won’t unlock general-purpose robotics. Integrating reasoning directly into robot learning (spatial, temporal, and failure-based) so robots can learn more from limited data and continuously self-improve is the path forward. Ji...

Open video →
Ekdeep Singh Lubana - From Probes to Rewards: Using Interpretability to Shape Training
Cohere
20 Apr 2026

Ekdeep Singh Lubana - From Probes to Rewards: Using Interpretability to Shape Training

Ekdeep Singh Lubana — Guest Speaker @ Cohere Labs AI Safety & Alignment Reading Group. Ekdeep is MTS at Goodfire, previously a research fellow at Harvard's Center for Brain Science. His recent work addresses some core issues with how we extra...

Open video →
Zifeng Liu - Human–AI Collaboration in Educational Assessment: Evaluating AI Generated Distractors
Cohere
13 Apr 2026

Zifeng Liu - Human–AI Collaboration in Educational Assessment: Evaluating AI Generated Distractors

In this talk, Zifeng will discuss the emerging role of generative AI in educational assessment, with a focus on the automatic generation and evaluation of multiple-choice distractors and feedback in computing and AI education. While large l...

Open video →
