CharacterGen combines a streamlined generation pipeline, an image-conditioned multi-view diffusion model, a transformer-based sparse-view reconstruction model, and a texture back-projection strategy that yields high-quality texture maps.
Efficiently generates 3D characters from single images using a streamlined pipeline.
Calibrates arbitrary input poses to a canonical form while preserving the key attributes of the input image.
Creates detailed 3D models from the generated multi-view images using a generalizable, transformer-based sparse-view reconstruction model (a minimal sketch of such a model follows this list).
Produces high-quality texture maps by back-projecting the multi-view appearances onto the generated 3D model (a back-projection sketch also follows this list).
A dataset of anime characters, rendered in multiple poses and views, for training and evaluating the CharacterGen model.
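To make the sparse-view reconstruction idea above more concrete, here is a minimal sketch of a transformer-based reconstructor in the LRM style, where learnable triplane tokens cross-attend to tokens produced by a frozen multi-view image encoder. The class name, dimensions, and layer counts are illustrative assumptions, not CharacterGen's released architecture.

```python
# Minimal sketch of a transformer-based sparse-view reconstructor
# (LRM-style). All names and hyperparameters are illustrative
# assumptions, not CharacterGen's actual implementation.
import torch
import torch.nn as nn

class SparseViewReconstructor(nn.Module):
    """Cross-attend learnable triplane tokens to multi-view image tokens."""

    def __init__(self, num_views=4, dim=512, triplane_res=32, triplane_dim=40):
        super().__init__()
        # One learnable embedding per input view, added to its image tokens.
        self.view_embed = nn.Parameter(torch.randn(num_views, 1, dim) * 0.02)
        # Learnable queries, one per triplane cell (3 planes of res x res).
        self.triplane_tokens = nn.Parameter(
            torch.randn(3 * triplane_res * triplane_res, dim) * 0.02
        )
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.to_triplane = nn.Linear(dim, triplane_dim)
        self.triplane_res = triplane_res

    def forward(self, view_tokens):
        # view_tokens: (B, num_views, tokens_per_view, dim) from a frozen
        # image encoder applied to the generated multi-view images.
        b, v, n, d = view_tokens.shape
        memory = (view_tokens + self.view_embed.unsqueeze(0)).reshape(b, v * n, d)
        queries = self.triplane_tokens.unsqueeze(0).expand(b, -1, -1)
        out = self.decoder(queries, memory)          # (B, 3*R*R, dim)
        planes = self.to_triplane(out)               # per-cell triplane features
        return planes.reshape(b, 3, self.triplane_res, self.triplane_res, -1)
```

In such a design, a small MLP would decode densities and colors from features sampled on these planes, and marching cubes or differentiable rendering would extract the final mesh.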
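For the texture back-projection step, a common recipe is to project each surface point into every generated view and blend the sampled colors, weighting by how directly the point faces that camera. The per-vertex sketch below assumes calibrated views in an OpenCV-style convention and skips depth-based occlusion tests and UV-space baking for brevity; the function and field names are illustrative, not CharacterGen's code.

```python
import numpy as np

def back_project_colors(vertices, normals, views):
    """Blend per-vertex colors from multiple calibrated views.

    Each view is a dict with keys: 'image' (H x W x 3, floats in [0, 1]),
    'K' (3 x 3 intrinsics), 'R' (3 x 3 rotation), 't' (3,) translation,
    mapping world coordinates to camera coordinates.
    """
    colors = np.zeros((len(vertices), 3))
    weights = np.zeros((len(vertices), 1))

    for view in views:
        img, K, R, t = view["image"], view["K"], view["R"], view["t"]
        h, w = img.shape[:2]

        # Transform vertices into camera space and project with intrinsics.
        cam_pts = vertices @ R.T + t            # (N, 3)
        uv = cam_pts @ K.T
        uv = uv[:, :2] / uv[:, 2:3]             # perspective divide

        # Weight each vertex by how directly it faces the camera.
        cam_center = -R.T @ t
        view_dir = cam_center - vertices
        view_dir = view_dir / np.linalg.norm(view_dir, axis=1, keepdims=True)
        facing = np.sum(normals * view_dir, axis=1, keepdims=True)

        # Keep vertices in front of the camera, inside the image bounds,
        # and facing it; back-facing vertices get zero weight.
        valid = (
            (cam_pts[:, 2:3] > 0)
            & (uv[:, 0:1] >= 0) & (uv[:, 0:1] < w)
            & (uv[:, 1:2] >= 0) & (uv[:, 1:2] < h)
            & (facing > 0)
        ).astype(float)
        weight = valid * np.clip(facing, 0.0, 1.0)

        # Nearest-neighbor sample of the view image at each projected pixel.
        x = np.clip(uv[:, 0].astype(int), 0, w - 1)
        y = np.clip(uv[:, 1].astype(int), 0, h - 1)
        sampled = img[y, x]

        colors += weight * sampled
        weights += weight

    return colors / np.maximum(weights, 1e-8)
```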
Generate 3D characters from single images for animation and rigging applications.
Create detailed 3D models from multi-view images for various industries such as gaming, film, and architecture.
Use the CharacterGen framework to generate 3D characters with high-quality shapes and textures for various applications.
Train and evaluate the CharacterGen model using the curated anime character dataset.
Input a single image of a character into the CharacterGen framework.
Configure the framework's parameters and settings as needed.
Run the CharacterGen pipeline to generate a 3D character from the input image.
Refine the generated 3D character using the texture back-projection strategy and other post-processing techniques.
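As a toy illustration of the back-projection refinement in the last step, the snippet below feeds one synthetic, front-facing view into the back_project_colors sketch from the feature list above; every value here is made up purely for demonstration.

```python
import numpy as np

# Unit quad in the z = 0 plane, facing +z.
vertices = np.array([[-0.5, -0.5, 0.0], [ 0.5, -0.5, 0.0],
                     [ 0.5,  0.5, 0.0], [-0.5,  0.5, 0.0]])
normals = np.tile([0.0, 0.0, 1.0], (4, 1))

# Synthetic camera two units in front of the quad, looking down -z
# (OpenCV convention: rotate 180 degrees about x so camera +z faces the quad).
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
t = -R @ np.array([0.0, 0.0, 2.0])
K = np.array([[256.0,   0.0, 128.0],
              [  0.0, 256.0, 128.0],
              [  0.0,   0.0,   1.0]])

# A flat red "rendered view"; a real pipeline would use the diffusion outputs.
image = np.ones((256, 256, 3)) * np.array([0.8, 0.2, 0.2])
view = {"image": image, "K": K, "R": R, "t": t}

colors = back_project_colors(vertices, normals, [view])
print(colors)  # every vertex picks up the red color from the single view
```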