Using ISR with Docker
Clone our repository and cd into it:

```bash
git clone https://github.com/idealo/image-super-resolution
cd image-super-resolution
```
- Build the Docker image for local usage:

```bash
docker build -t isr . -f Dockerfile.cpu
```
To train remotely on AWS EC2 with GPU:

- Install Docker Machine
- Install the AWS Command Line Interface
- Set up an EC2 instance for training with GPU support. You can follow our nvidia-docker-keras project to get started.
Place your images under `data/input/<data name>`; the results will be saved under `data/output/<data name>/<model>/<training setting>`.
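As a sketch of the folder layout described above (the dataset, model, and training-setting names here are made-up placeholders, not values from the repo):

```python
from pathlib import Path

# Hypothetical names, just to illustrate the layout; substitute your own.
data_name = "sample_images"                       # your <data name>
model = "rrdn"                                    # a <model> name
training_setting = "C4-D3-G32-G032-T10-x4"        # a <training setting>

# Inputs go under data/input/<data name>; results land under
# data/output/<data name>/<model>/<training setting>.
input_dir = Path("data/input") / data_name
output_dir = Path("data/output") / data_name / model / training_setting

print(input_dir.as_posix())   # data/input/sample_images
print(output_dir.as_posix())  # data/output/sample_images/rrdn/C4-D3-G32-G032-T10-x4
```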
NOTE: make sure that your images have only 3 channels (the png format allows for 4).
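If you want to catch 4-channel PNGs up front, a minimal stdlib-only sketch (not part of the ISR code base) can read the declared channel count straight from the PNG header:

```python
from pathlib import Path

# PNG layout: an 8-byte signature, then the IHDR chunk; the color type
# byte sits at offset 25 and maps to a channel count.
CHANNELS_BY_COLOR_TYPE = {0: 1, 2: 3, 3: 1, 4: 2, 6: 4}

def png_channels(data: bytes) -> int:
    """Return the number of channels a PNG byte stream declares."""
    if not data.startswith(b"\x89PNG\r\n\x1a\n"):
        raise ValueError("not a PNG file")
    return CHANNELS_BY_COLOR_TYPE[data[25]]

# Example: scan the input folder and flag anything that is not plain RGB.
for path in Path("data/input").rglob("*.png"):
    if png_channels(path.read_bytes()) != 3:
        print(f"{path} is not a 3-channel RGB PNG; convert it first")
```

A plain RGB PNG declares color type 2 (3 channels); an RGBA PNG declares color type 6 (4 channels).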
Check the configuration file `config.yml` for more information on parameters and default folders.
The `-d` flag in the run command tells the program to load the weights specified in `config.yml`. Alternatively, you can select any option interactively from the command line.
Download the pre-trained weights as described here.
Update `config.yml` according to the model you want to use. For example:
```yaml
# config.yml
default:
  generator: rrdn  # Use rrdn
  ...
  weights_paths:
    # Point to the rrdn weights file
    discriminator:
    generator: ./weights/rrdn-C4-D3-G32-G032-T10-x4_epoch299.hdf5
```
From the main folder run:

```bash
docker run -v $(pwd)/data/:/home/isr/data \
  -v $(pwd)/weights/:/home/isr/weights \
  -v $(pwd)/config.yml:/home/isr/config.yml \
  -it isr -p -d -c config.yml
```
Predict on AWS with nvidia-docker
From the remote machine, run (using our DockerHub image):

```bash
sudo nvidia-docker run -v $(pwd)/isr/data/:/home/isr/data \
  -v $(pwd)/isr/weights/:/home/isr/weights \
  -v $(pwd)/isr/config.yml:/home/isr/config.yml \
  -it idealo/image-super-resolution-gpu -p -d -c config.yml
```
Train either locally with (or without) Docker, or on the cloud with nvidia-docker and AWS.

Add your training set, including training and validation Low Res and High Res folders, under the `data` folder that gets mounted into the container.
Train on AWS with GPU support using nvidia-docker
To train with the default settings set in `config.yml`, follow these steps:
1. From the main folder, run

   ```bash
   bash scripts/setup.sh -m <name-of-ec2-instance> -b -i -u -d <data_name>
   ```
2. SSH into the machine:

   ```bash
   docker-machine ssh <name-of-ec2-instance>
   ```
3. Run training with

   ```bash
   sudo nvidia-docker run -v $(pwd)/isr/data/:/home/isr/data \
     -v $(pwd)/isr/logs/:/home/isr/logs \
     -v $(pwd)/isr/weights/:/home/isr/weights \
     -v $(pwd)/isr/config.yml:/home/isr/config.yml \
     -it isr -t -d -c config.yml
   ```
`<data_name>` is the name of the folder containing your dataset. It must be under the mounted `data` folder.
The logs folder is mounted into the Docker image. To monitor training with TensorBoard, open another EC2 terminal and run:

```bash
tensorboard --logdir /home/ubuntu/isr/logs
```

Then, from your local machine, forward the TensorBoard port:

```bash
docker-machine ssh <name-of-ec2-instance> -N -L 6006:localhost:6006
```
A few helpful details
- DO NOT include a Tensorflow version in `requirements.txt`, as it would interfere with the version installed in the Tensorflow docker image
- DO NOT use the Ubuntu Server 18.04 LTS AMI. Use the Ubuntu Server 16.04 LTS AMI instead
Train locally with Docker
From the main project folder, run:

```bash
docker run -v $(pwd)/data/:/home/isr/data \
  -v $(pwd)/logs/:/home/isr/logs \
  -v $(pwd)/weights/:/home/isr/weights \
  -v $(pwd)/config.yml:/home/isr/config.yml \
  -it isr -t -d -c config.yml
```