Tags: MisaOgura/flashtorch
Implement activation maximization (#7)

* Break from the iteration as soon as the first conv layer is found
* Extract common utils
* Implement gradient ascent and optimization with Adam
* Iterate only twice for faster test speed
* Test registration of the hooks to the right layers
* Create method for visualisation
* Test visualisation of one filter
* Make the method exclusive for visualising one filter
* Change the default learning rate for Adam to 0.01
* Accommodate other int types
* Create method for plotting random filters from a layer
* Enable setting of lr and weight decay for Adam
* Set img_size as an attribute of the class
* Set with_adam as an attribute of the class
* Test when num_subplots > total_num_filters
* Create a single API entry point for plotting
* Pass in a conv layer to visualise rather than a layer idx
* Reorganise public and private interfaces
* Remove unnecessary class inheritance
* Enable custom adjustment of colour saturation and brightness
* Remove gradient ascent with Adam
* Make sure to remove existing hooks before optimizing
* Return every iteration of the optimization
* Update docstrings and comments
* Add docstrings
* Make the visualize function a method of the Backprop class
* Standardize the use of language
* Set the title for each subplot
* Create demo notebooks for activemax
* Allow users to set plot titles
* Enable the use of GPU
* Use GPU in Colab
* Rename module
* Use the new module name
* Test execution
* Use z...
* Bump version
* Specify version in __init__.py
* Add brief explanation and set kernel to python3
* Install flashtorch
* Use GPU
* Add some comments on filter visualization
* Add deepdream API
* Add demo for deepdream
* Don't pass the lr into optimize; use the class attribute
* Update docstring
* Update example notebooks
* Add explanation for deepdream in the example notebooks
* Update README
* Update an image
* Update README
* Fix wrong link
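The commits above describe filter visualization by gradient ascent: hook a target conv layer, push a random image through the model, and repeatedly nudge the image towards higher activation of one filter. Below is a minimal, self-contained sketch of that idea. It is not FlashTorch's actual implementation; the `visualize_filter` helper, the VGG16 example model, and the iteration count are illustrative assumptions, though passing in a conv layer (rather than a layer index), removing hooks after optimizing, and the 0.01 learning rate mirror the commit messages.

```python
# Rough sketch of activation maximization via gradient ascent.
# NOT FlashTorch's implementation; helper name and model choice are assumptions.
import torch
import torchvision.models as models


def visualize_filter(model, conv_layer, filter_idx, img_size=224, lr=0.01,
                     iterations=30):
    """Optimize a random image so one filter in `conv_layer` activates strongly."""
    activations = {}

    def hook(module, inputs, output):
        activations['value'] = output

    # Register the hook on the conv layer that was passed in (not a layer index),
    # and keep the handle so the hook can be removed afterwards.
    handle = conv_layer.register_forward_hook(hook)

    img = torch.randn(1, 3, img_size, img_size, requires_grad=True)
    for _ in range(iterations):
        model(img)
        # Mean activation of the chosen filter is the quantity to maximize.
        loss = activations['value'][0, filter_idx].mean()
        model.zero_grad()
        if img.grad is not None:
            img.grad.zero_()
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad  # gradient *ascent*: step towards higher activation

    handle.remove()  # remove the existing hook before any further optimization
    return img.detach()


model = models.vgg16(pretrained=True).eval()
first_conv = model.features[0]  # pass a conv layer to visualise, not a layer idx
optimized = visualize_filter(model, first_conv, filter_idx=5)
```

The same loop, started from a natural photograph instead of random noise, is essentially the deepdream variant the later commits refer to.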