Efficient Neural Architecture Search (ENAS) in PyTorch

PyTorch implementation of Efficient Neural Architecture Search via Parameters Sharing.

[Figure: ENAS_rnn]

ENAS reduces the computational requirement (GPU-hours) of Neural Architecture Search (NAS) by 1000x via parameter sharing between models that are subgraphs within a large computational graph. It achieves state-of-the-art results on Penn Treebank language modeling.
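
The sharing scheme can be pictured as a single supergraph whose edges own all the weights, with each child model running only a sampled subgraph. Below is a minimal PyTorch sketch of that idea; the class name and edge naming are illustrative assumptions, not this repository's actual API:

    import torch
    import torch.nn as nn

    class SharedGraph(nn.Module):
        """Toy supergraph: one shared weight per candidate edge (illustrative only)."""
        def __init__(self, num_nodes=4, hidden=32):
            super().__init__()
            # Every candidate connection owns one set of weights; all sampled
            # child models reuse these same parameters.
            self.edges = nn.ModuleDict({
                f"{i}-{j}": nn.Linear(hidden, hidden)
                for i in range(num_nodes) for j in range(i + 1, num_nodes)
            })

        def forward(self, x, architecture):
            # `architecture` is a list of edge names chosen by the controller;
            # only this subgraph runs, but its weights live in the shared graph.
            for name in architecture:
                x = torch.tanh(self.edges[name](x))
            return x

    graph = SharedGraph()
    out = graph(torch.randn(8, 32), architecture=["0-1", "1-2", "2-3"])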

**[Caveat] Use the official code from the authors: https://github.com/melodyguan/enas**

Prerequisites

  • Python 3.6+
  • PyTorch
  • tqdm, scipy, imageio

Usage

Install prerequisites with:

conda install graphviz
pip install -r requirements.txt
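
To launch architecture search afterwards (a sketch based on the upstream ENAS-pytorch usage notes; the exact flag names may differ in this fork, so verify with `python main.py --help`):

python main.py --network_type rnn --dataset ptb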

Results

Efficient Neural Architecture Search (ENAS) is composed of two sets of learnable parameters: the controller LSTM parameters θ and the shared model parameters ω. The two sets are trained alternately, and only the trained controller is used to derive novel architectures.
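
A self-contained toy sketch of this alternating scheme (REINFORCE for θ, gradient descent for ω); the names, the two-layer search space, and the regression task are illustrative assumptions, not this repository's code:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.distributions import Categorical

    # omega: 3 candidate ops per layer, 2 layers, all weights shared.
    ops = nn.ModuleList([nn.Linear(16, 16) for _ in range(3)])
    # theta: controller over which op each layer uses (a toy stand-in for the LSTM).
    logits = nn.Parameter(torch.zeros(2, 3))
    shared_opt = torch.optim.SGD(ops.parameters(), lr=0.1)
    ctrl_opt = torch.optim.Adam([logits], lr=3.5e-4)

    def run(x, arch):
        for op_idx in arch:                      # execute only the sampled subgraph
            x = torch.tanh(ops[int(op_idx)](x))
        return x

    x, y = torch.randn(32, 16), torch.randn(32, 16)
    for step in range(100):
        # Phase 1: update shared omega on a sampled architecture (theta fixed).
        arch = Categorical(logits=logits).sample()
        loss = F.mse_loss(run(x, arch), y)
        shared_opt.zero_grad()
        loss.backward()
        shared_opt.step()

        # Phase 2: update controller theta with REINFORCE (omega fixed).
        dist = Categorical(logits=logits)
        arch = dist.sample()
        with torch.no_grad():
            reward = -F.mse_loss(run(x, arch), y)   # higher reward = lower loss
        ctrl_loss = -dist.log_prob(arch).sum() * reward
        ctrl_opt.zero_grad()
        ctrl_loss.backward()
        ctrl_opt.step()

Deriving a novel architecture then amounts to taking the controller's most probable choices (here, logits.argmax(dim=1)).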

1. Discovering Convolutional Neural Networks

(in progress)

2. Designing Convolutional Cells

(in progress)

Reference

Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le and Jeff Dean, "Efficient Neural Architecture Search via Parameters Sharing," arXiv:1802.03268, 2018.

Author

Taehoon Kim / @carpedm20
