
Lifespan Age Transformation Synthesis
Roy Or-El¹, Soumyadip Sengupta¹, Ohad Fried², Eli Shechtman³, Ira Kemelmacher-Shlizerman¹
¹University of Washington, ²Stanford University, ³Adobe Research

Lifespan Age Transformation Synthesis is a GAN-based method designed to simulate the continuous aging process from a single input image.
This code is the official PyTorch implementation of the paper:
Lifespan Age Transformation Synthesis
Roy Or-El, Soumyadip Sengupta, Ohad Fried, Eli Shechtman, Ira Kemelmacher-Shlizerman
ECCV 2020
https://arxiv.org/pdf/2003.09764.pdf
We have devoted considerable effort in our algorithm design to preserving the identity of the person in the input image and to minimizing the influence of the inherent dataset biases on the results. Despite these measures, the network might still introduce biases that we did not consider when designing the algorithm. If you spot any bias in the results, please reach out to help future research!
You must have a GPU with CUDA support in order to run the code.
This code requires PyTorch and torchvision to be installed; see PyTorch.org for installation instructions.
We tested our code on PyTorch 1.4.0 and torchvision 0.5.0, but the code should run on any PyTorch version above 1.0.0, and any torchvision version above 0.4.0.
A few additional Python packages are required as well. If any of them are missing from your machine, you can install them using the supplied requirements.txt file:
pip install -r requirements.txt
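As a quick sanity check (not part of the repo), you can verify that PyTorch, torchvision, and CUDA are all visible before running anything:

python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"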
You can try running the method on your own images!
You can either run the demo locally or explore it in Colab.
Running locally:
1. Download the pretrained models:
   python download_models.py
2. List the paths of the images you want to process in a .txt file (see males_image_list.txt or females_image_list.txt for examples).
3. Open ./run_scripts/in_the_wild.sh (Linux) or ./run_scripts/in_the_wild.bat (Windows).
4. Select the model you wish to use with the --name flag (males_model or females_model).
5. Point the --image_path_file flag at your image list.
6. Run the script. The output videos will be saved in results/males_model/test_latest/traversal/ or results/females_model/test_latest/traversal/ (according to the selected model). A concrete run sketch follows below.
Please refer to Using your own images for guidelines on what images are good to use.
If you get a CUDA out of memory error, slightly increase the --interp_step parameter until it fits your GPU. This parameter controls the number of interpolated frames between every two anchor classes; increasing it will reduce the length of the output video.
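Putting the steps above together, a local demo run might look like the sketch below; my_images.txt is a hypothetical list file, and the flags are the ones documented above.

python download_models.py                      # fetch the pretrained models
printf '%s\n' photos/me.png > my_images.txt    # one image path per line
# edit run_scripts/in_the_wild.sh: set --name females_model and
# --image_path_file my_images.txt; raise --interp_step if you hit CUDA OOM
./run_scripts/in_the_wild.sh
# output video: results/females_model/test_latest/traversal/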
Using your own images
For best results, use images according to the following guidelines:
Preparing the FFHQ-Aging dataset
1. Download the FFHQ-Aging dataset: go to the FFHQ-Aging dataset repo and follow the instructions to download the data.
2. Prune & organize the raw FFHQ-Aging dataset into age classes:
cd datasets
python create_dataset.py --folder <path to raw FFHQ-Aging directory> --labels_file <path to raw FFHQ-Aging labels csv file> [--train_split <num of training images (default=69000)>]
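For example, with hypothetical paths, the call might look like this (omit --train_split to keep the default of 69000 training images):

python create_dataset.py \
    --folder ~/data/ffhq_aging_raw \
    --labels_file ~/data/ffhq_aging_labels.csv \
    --train_split 69000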
Training
1. If you haven't done so already, download the pretrained models:
   python download_models.py
2. Launch visdom and monitor results at http://localhost:8097. If you run the code on a remote server, open http://hostname:8097 instead.
3. Open run_scripts/train.sh (Linux) or run_scripts/train.bat (Windows) and set:
   - --gpu_ids, as well as the CUDA_VISIBLE_DEVICES environment variable.
   - --dataroot to point at your dataset.
   - --name to the name of the model.
   - --batchSize according to your GPU's maximum RAM capacity and the number of GPUs available.
4. To resume training, add the --continue_training flag and specify the checkpoint you wish to continue training from in the --which_epoch flag, e.g. --which_epoch 100 or --which_epoch latest.
5. Run ./run_scripts/train.sh (Linux) or ./run_scripts/train.bat (Windows). A sketch of an edited script follows this list.
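For reference, an edited run_scripts/train.sh might look roughly like the sketch below. The train.py entry point, the dataset path, the model name, and the batch size are assumptions for illustration; the flags themselves are the documented ones.

python -m visdom.server &    # training monitor at http://localhost:8097
# train.py as the entry point is an assumption; adjust paths/names to your setup
CUDA_VISIBLE_DEVICES=0,1 python train.py \
    --gpu_ids 0,1 \
    --dataroot ./datasets/my_dataset \
    --name my_model \
    --batchSize 6
# resume an interrupted run by adding: --continue_training --which_epoch latest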
Testing
1. Open run_scripts/test.sh (Linux) or run_scripts/test.bat (Windows) and set:
   - --dataroot to point at your dataset.
   - --name to the name of the model.
   - --which_epoch. This can be either an epoch number, e.g. 400, or the latest saved model, latest.
2. Run ./run_scripts/test.sh (Linux) or ./run_scripts/test.bat (Windows).
3. The results will be saved in results/<model name>/test_<model_checkpoint>/index.html. A sketch follows this list.
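Under the same assumptions (test.py as the entry point, hypothetical dataset path and model name), a matching test invocation might be:

# test.py as the entry point is an assumption; flags are the documented ones
python test.py \
    --dataroot ./datasets/my_dataset \
    --name my_model \
    --which_epoch latest
# then open results/my_model/test_latest/index.html in a browser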
Generating a full progression image
1. Write a .txt file with a list of image paths to process. See examples in males_image_list.txt and females_image_list.txt.
2. Open run_scripts/traversal.sh (Linux) or run_scripts/traversal.bat (Windows) and set:
   - --dataroot to point at your dataset.
   - --name to the name of the model.
   - --which_epoch. This can be either an epoch number, e.g. 400, or the latest saved model, latest.
   - --image_path_file to point at your image list.
3. Run ./run_scripts/traversal.sh (Linux) or ./run_scripts/traversal.bat (Windows).
4. The results will be saved in results/<model name>/test_<model_checkpoint>/traversal/. This will generate an image of progressions to all anchor classes for each input. A workflow sketch follows this list.
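A sketch of the full-progression workflow, with my_images.txt as a hypothetical list file:

printf '%s\n' photos/a.png photos/b.png > my_images.txt    # one image path per line
# edit run_scripts/traversal.sh: set --dataroot, --name, --which_epoch 400,
# and --image_path_file my_images.txt, then run:
./run_scripts/traversal.sh
# progression images land in results/<model name>/test_400/traversal/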
Generating aging videos
1. Write a .txt file with a list of image paths to generate videos for. See examples in males_image_list.txt and females_image_list.txt.
2. Open run_scripts/deploy.sh (Linux) or run_scripts/deploy.bat (Windows) and set:
   - --dataroot to point at your dataset.
   - --name to the name of the model.
   - --which_epoch. This can be either an epoch number, e.g. 400, or the latest saved model, latest.
   - --image_path_file to point at your image list.
3. Run ./run_scripts/deploy.sh (Linux) or ./run_scripts/deploy.bat (Windows).
4. The results will be saved in results/<model name>/test_<model_checkpoint>/deploy/. A workflow sketch follows this list.
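The video-generation workflow is analogous, reusing the hypothetical my_images.txt from above:

# edit run_scripts/deploy.sh: set --dataroot, --name, --which_epoch latest,
# and --image_path_file my_images.txt, then run:
./run_scripts/deploy.sh
# aging videos land in results/<model name>/test_latest/deploy/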
Training on a new dataset
If you wish to train the model on a new dataset, arrange it in the following structure:
├── dataset_name
│   ├── train<class1>
│   │   ├── image1.png
│   │   ├── image2.png
│   │   ├── ...
│   │   └── parsings
│   │       ├── image1.png
│   │       ├── image2.png
│   │       └── ...
│   ├── ...
│   ├── train<classN>
│   │   ├── image1.png
│   │   ├── image2.png
│   │   ├── ...
│   │   └── parsings
│   │       ├── image1.png
│   │       ├── image2.png
│   │       └── ...
│   ├── test<class1>
│   │   ├── image1.png
│   │   ├── image2.png
│   │   ├── ...
│   │   └── parsings
│   │       ├── image1.png
│   │       ├── image2.png
│   │       └── ...
│   ├── ...
│   └── test<classN>
│       ├── image1.png
│       ├── image2.png
│       ├── ...
│       └── parsings
│           ├── image1.png
│           ├── image2.png
│           └── ...
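If it helps, here is one way to scaffold that layout from a shell. The dataset name and the class folder names (0-2, 3-6) are placeholders for your own age labels, and each parsings folder is expected to hold the face parsing (segmentation) map matching the image of the same name one level up.

for split in train test; do
    for cls in 0-2 3-6; do
        mkdir -p my_dataset/${split}${cls}/parsings    # placeholder class labels
    done
done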
Citation
If you use this code for your research, please cite our paper:
@inproceedings{orel2020lifespan,
title={Lifespan Age Transformation Synthesis},
author={Or-El, Roy
and Sengupta, Soumyadip
and Fried, Ohad
and Shechtman, Eli
and Kemelmacher-Shlizerman, Ira},
booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
year={2020}
}
Acknowledgments
This code is inspired by pix2pix-HD and style-based-gan-pytorch.