ICCV 2023 Paper List
Table of Contents
Oral Papers
  3D from multi-view and sensors
  Generative AI
Poster Papers
  3D Generation (Neural generative models)
  3D from a single image and shape-from-x
  3D Editing
  Face and gestures
  Stylization
  Dataset

Oral Papers

3D from multi-view and sensors

Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields
LERF: Language Embedded Radiance Fields
Mixed Neural Voxels for Fast Multi-view Video Synthesis
Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor
Diffusion-Guided Reconstruction of Everyday Hand-Object Interaction Clips
Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions
Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction
ScanNet++: A High-Fidelity Dataset of 3D Indoor Scenes
EgoLoc: Revisiting 3D Object Localization from Egocentric Videos with Visual Queries

Generative AI

TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models
Generative Novel View Synthesis with 3D-Aware Diffusion Models

Poster Papers

3D Generation (Neural generative models)

VQ3D: Learning a 3D-Aware Generative Model on ImageNet
GRAM-HD: 3D-Consistent Image Generation at High Resolution with Generative Radiance Manifolds
Generative Multiplane Neural Radiance for 3D-Aware Image Generation
Get3DHuman: Lifting StyleGAN-Human into a 3D Generative Model Using Pixel-Aligned Reconstruction Priors
Towards High-Fidelity Text-Guided 3D Face Generation and Manipulation Using only Images
ATT3D: Amortized Text-to-3D Object Synthesis
Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation
GETAvatar: Generative Textured Meshes for Animatable Human Avatars
Mimic3D: Thriving 3D-Aware GANs via 3D-to-2D Imitation
DreamBooth3D: Subject-Driven Text-to-3D Generation
3D-aware Image Generation using 2D Diffusion Models
Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction

3D from a single image and shape-from-x

Accurate 3D Face Reconstruction with Facial Component Tokens
HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and Dynamic Details
Zero-1-to-3: Zero-shot One Image to 3D Object
Deformable Model-Driven Neural Rendering for High-Fidelity 3D Reconstruction of Human Heads Under Low-View Settings

3D Editing

Vox-E: Text-Guided Voxel Editing of 3D Objects
FaceCLIPNeRF: Text-driven 3D Face Manipulation using Deformable Neural Radiance Fields
SKED: Sketch-guided Text-based 3D Editing
Seal-3D: Interactive Pixel-Level Editing for Neural Radiance Fields

Face and gestures

Speech4Mesh: Speech-Assisted Monocular 3D Facial Reconstruction for Speech-Driven 3D Facial Animation
Imitator: Personalized Speech-driven 3D Facial Animation
EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation
SPACE: Speech-driven Portrait Animation with Controllable Expression

Stylization

Diffusion in Style
Creative Birds: Self-Supervised Single-View 3D Style Transfer
StyleDomain: Efficient and Lightweight Parameterizations of StyleGAN for One-shot and Few-shot Domain Adaptation
StylerDALLE: Language-Guided Style Transfer Using a Vector-Quantized Tokenizer of a Large-Scale Generative Model
X-Mesh: Towards Fast and Accurate Text-driven 3D Stylization via Dynamic Textual Guidance
Locally Stylized Neural Radiance Fields
DS-Fusion: Artistic Typography via Discriminated and Stylized Diffusion
Multi-Directional Subspace Editing in Style-Space
StyleDiffusion: Controllable Disentangled Style Transfer via Diffusion Models
All-to-Key Attention for Arbitrary Style Transfer
DeformToon3D: Deformable Neural Radiance Fields for 3D Toonification
Anti-DreamBooth: Protecting Users from Personalized Text-to-image Synthesis
Neural Collage Transfer: Artistic Reconstruction via Material Manipulation

Dataset

H3WB: Human3.6M 3D WholeBody Dataset and Benchmark
SynBody: Synthetic Dataset with Layered Human Models for 3D Human Perception and Modeling
Human-centric Scene Understanding for 3D Large-scale Scenarios