---
task_categories:
- image-to-video
---
# Light-Syn Dataset

This repository contains the **Light-Syn** dataset, introduced in the paper *Light-X: Generative 4D Video Rendering with Camera and Illumination Control*.

**Project Page:** https://lightx-ai.github.io/

**Code:** https://github.com/TQTQliu/Light-X
## Dataset Description

Light-Syn is built with a degradation-based, inverse-mapping pipeline that synthesizes training pairs from in-the-wild monocular footage. This strategy yields a dataset covering static, dynamic, and AI-generated scenes, ensuring robust training of the Light-X framework, which enables controllable rendering from monocular videos with both viewpoint and illumination control.
## Sample Usage

This dataset is used for training the Light-X model. The following steps outline how to prepare the data and start training, as described in the associated GitHub repository.
### 1. Prepare Training Data
Download the dataset.
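If the dataset is hosted on the Hugging Face Hub, a minimal sketch with the `huggingface_hub` library looks like the following; the `repo_id` and local directory below are placeholder assumptions, not values taken from the repository.

```python
# Sketch: download the Light-Syn dataset with huggingface_hub.
# The repo_id and local_dir are placeholders (assumptions) -- replace them
# with the actual Hugging Face repository name and your preferred data path.
from huggingface_hub import snapshot_download

data_path = snapshot_download(
    repo_id="<USER_OR_ORG>/Light-Syn",   # placeholder repo id (assumption)
    repo_type="dataset",
    local_dir="./data/light-syn",        # used as <DATA_PATH> in the next step
)
print(f"Dataset downloaded to: {data_path}")
```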
### 2. Generate Metadata
Generate the metadata JSON file describing the training samples:

```bash
python tools/gen_json.py -r <DATA_PATH>
```
Then update `DATASET_META_NAME` in your config to point to the newly generated JSON file.
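As a quick sanity check before training, you can confirm the generated file parses; this is only a sketch that assumes `gen_json.py` writes standard JSON, and the filename below is a placeholder (the actual path and internal schema are defined by the Light-X tooling).

```python
# Sketch: verify the generated metadata file loads as valid JSON.
# The path is a placeholder (assumption) -- use the file gen_json.py produced.
import json

meta_path = "path/to/generated_metadata.json"

with open(meta_path, "r") as f:
    meta = json.load(f)

# len() works whether the top level is a list of samples or a dict keyed by sample id.
print(f"{meta_path}: {len(meta)} top-level entries")
```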
### 3. Start Training
Begin the training process; checkpoints will be saved in the `output_train/` directory.

```bash
bash train.sh
```
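After (or during) a run, a small sketch like the following can locate the most recently written file under `output_train/`; it assumes only that checkpoints are saved as files in that directory, as stated above, and makes no assumption about their names or format.

```python
# Sketch: find the newest file saved under output_train/ (the checkpoint
# directory named in this README). Filenames and formats may differ.
from pathlib import Path

ckpt_dir = Path("output_train")
files = []
if ckpt_dir.is_dir():
    # Collect every file under output_train/, oldest to newest.
    files = sorted(
        (p for p in ckpt_dir.rglob("*") if p.is_file()),
        key=lambda p: p.stat().st_mtime,
    )
print(f"Latest checkpoint file: {files[-1]}" if files else "No checkpoints found yet.")
```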