PyTorch implementation of Efficient Neural Architecture Search via Parameters Sharing.
ENAS reduces the computational cost (GPU-hours) of Neural Architecture Search (NAS) by 1000x by sharing parameters between models that are subgraphs within one large computational graph. It achieves state-of-the-art results on Penn Treebank language modeling.
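
To illustrate the parameter-sharing idea, here is a minimal sketch using a recent PyTorch API; the `SharedSupergraph` class and its fixed bank of linear candidate ops are hypothetical and not part of this repository. Each candidate operation lives once in a shared bank, a sampled architecture is just a list of indices into that bank, so every sampled subgraph reuses the same weights.

```python
import torch
import torch.nn as nn

class SharedSupergraph(nn.Module):
    """Hypothetical supergraph: one shared bank of candidate ops per layer."""

    def __init__(self, dim, num_layers=4, num_ops=3):
        super().__init__()
        # These weights are shared by every architecture that gets sampled.
        self.banks = nn.ModuleList(
            nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_ops))
            for _ in range(num_layers)
        )

    def forward(self, x, arch):
        # `arch` picks one candidate op per layer; only those ops are used,
        # but their parameters persist across different sampled architectures.
        for bank, op_idx in zip(self.banks, arch):
            x = torch.relu(bank[op_idx](x))
        return x

supergraph = SharedSupergraph(dim=32)
out = supergraph(torch.randn(8, 32), arch=[0, 2, 1, 0])  # one sampled subgraph
```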
**[Caveat] Use the official code from the authors: link**
- Python 3.6+
- PyTorch
- tqdm, scipy, imageio
Install prerequisites with:

```sh
conda install graphviz
pip install -r requirements.txt
```
Efficient Neural Architecture Search (ENAS) has two sets of learnable parameters: the controller LSTM parameters θ and the shared model parameters ω. The two sets are trained alternately, and only the trained controller is then used to derive novel architectures.
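
As a rough illustration of that alternation, here is a minimal sketch with hypothetical interfaces (`controller.sample()`, `shared_model.loss()`, `shared_model.reward()`, and the two optimizers are placeholder names, not this repository's API): ω is updated on training batches with architectures sampled from the controller, and θ is updated with REINFORCE using validation performance as the reward, with a moving-average baseline to reduce variance.

```python
import torch

def train_enas(controller, shared_model, train_batches, valid_batches,
               omega_opt, theta_opt, epochs=10, bl_decay=0.95):
    baseline = None
    for _ in range(epochs):
        # Phase 1: train the shared parameters ω on sampled subgraphs.
        for x, y in train_batches:
            arch, _ = controller.sample()           # sample one architecture
            loss = shared_model.loss(x, y, arch)    # forward through that subgraph only
            omega_opt.zero_grad()
            loss.backward()
            omega_opt.step()

        # Phase 2: train the controller parameters θ with REINFORCE.
        for x, y in valid_batches:
            arch, log_prob = controller.sample()
            with torch.no_grad():
                reward = shared_model.reward(x, y, arch)   # e.g. validation accuracy
            # Exponential moving-average baseline lowers gradient variance.
            baseline = reward if baseline is None else (
                bl_decay * baseline + (1 - bl_decay) * reward)
            policy_loss = -(reward - baseline) * log_prob
            theta_opt.zero_grad()
            policy_loss.backward()
            theta_opt.step()

    # After training, only the controller is used to derive an architecture;
    # a fuller version would sample several candidates and keep the one that
    # scores best on validation data.
    best_arch, _ = controller.sample()
    return best_arch
```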
(in progress)
- Neural Architecture Search with Reinforcement Learning
- Neural Optimizer Search with Reinforcement Learning
Taehoon Kim / @carpedm20