A detailed tutorial covering the code in this repository: Turning design mockups into code with deep learning.
The neural network is built in three iterations: it starts with a Hello World version, continues with the main neural network layers, and ends with a version trained to generalize.
Note: only the Bootstrap version can generalize to new design mock-ups. It uses 16 domain-specific tokens that are translated into HTML/CSS, and achieves 97% accuracy (BLEU, 4-gram, greedy search). This version can be trained on a few GPUs. The raw HTML version has the potential to generalize, but is still unproven and requires a significant number of GPUs to train.
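The BLEU score above compares the generated token sequence against the reference markup. As a rough illustration (this is a simplified sketch, not the repository's actual evaluation code), a single-reference 4-gram BLEU can be computed in pure Python like this:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Single-reference 4-gram BLEU with brevity penalty (sketch only)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())          # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smooth zero counts
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean

# Hypothetical token sequences in the style of the Bootstrap vocabulary
generated = "header btn-active row single".split()
reference = "header btn-active row single".split()
print(round(bleu(generated, reference), 2))  # identical sequences score 1.0
```

A perfect match scores 1.0; each wrong or missing token lowers the clipped n-gram precisions and thus the score.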
A quick overview of the process:
1) Give a design image to the trained neural network
2) The neural network converts the image into HTML markup
3) Rendered output
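Step 2 works like image captioning: the model is fed the image features plus the tokens generated so far, and greedily appends the most likely next token until it emits an end marker. A minimal sketch of that loop (the token names and the stub predictor are illustrative, not the repository's actual code):

```python
def greedy_decode(predict_next, image_features, max_len=150):
    """Repeatedly ask the model for the most likely next token,
    stopping at the END marker or after max_len steps."""
    tokens = ["START"]
    for _ in range(max_len):
        next_token = predict_next(image_features, tokens)
        if next_token == "END":
            break
        tokens.append(next_token)
    return tokens[1:]  # drop the START marker

# Stub standing in for the trained Keras model's predict step.
def fake_model(image_features, tokens):
    script = ["header", "row", "single", "END"]
    return script[len(tokens) - 1]

print(greedy_decode(fake_model, image_features=None))
# → ['header', 'row', 'single']
```

The emitted tokens are then run through the compiler (in the Bootstrap version) to produce the final HTML/CSS.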
```
pip install keras tensorflow pillow h5py jupyter
```

```
git clone https://github.com/emilwallner/Screenshot-to-code-in-Keras
cd Screenshot-to-code-in-Keras/
jupyter notebook
```
Go to the desired notebook (the files ending in '.ipynb'). To run the model, open the menu and click Cell > Run all.
The final version, the Bootstrap version, is prepared with a small dataset so you can test-run the model. If you want to try it with all the data, download the dataset here: https://www.floydhub.com/emilwallner/datasets/imagetocode, and specify the correct dataset path in the notebook.
```
| |-Bootstrap                      #The Bootstrap version
| |  |-compiler                    #A compiler to turn the tokens to HTML/CSS (by pix2code)
| |  |-resources
| |  |  |-eval_light               #10 test images and markup
| |-Hello_world                    #The Hello World version
| |-HTML                           #The HTML version
| |  |-Resources_for_index_file    #CSS, images and scripts to test index.html file
| |  |-html                        #HTML files to train it on
| |  |-images                      #Screenshots for training
|-readme_images                    #Images for the readme page
```