Lucid is a collection of infrastructure and tools for research in neural network interpretability.
This repository is a fork of the original tensorflow/lucid repository, modified to support TensorFlow 2, since Colab no longer supports the `%tensorflow_version 1.x` compatibility magic command.
Lucid is research code, not production code. We provide no guarantee it will work for your use case. Lucid is maintained by volunteers who are unable to provide significant technical support.
- 📓 Notebooks -- Get started without any setup!
- 📚 Reading -- Learn more about visualizing neural nets.
- 💬 Community -- Want to get involved? Please reach out!
- 🔧 Additional Information -- Licensing, code style, etc.
- 🔬 Start Doing Research! -- Want to get involved? We're trying to research openly!
- 📦 Visualize your own model -- How to import your own model for visualization
Start visualizing neural networks with no setup. The following notebooks run right from your browser, thanks to Colaboratory. It's a Jupyter notebook environment that requires no setup to use and runs entirely in the cloud.
You can run the notebooks on your local machine, too. Clone the repository and find them in the `notebooks` subfolder; you will need a local instance of the Jupyter notebook environment to execute them.
- This tutorial quickly introduces Lucid, a library for visualizing neural networks. Lucid is a kind of spiritual successor to DeepDream, but provides flexible abstractions so that it can be used for a wide range of interpretability research. (A minimal quickstart sketch follows this list.)
- If you want to study techniques for visualizing and understanding neural networks, it's important to be able to try your experiments on multiple models. Lucid is a library for visualizing neural networks. As of lucid v0.3, we provide a consistent API for interacting with 27 different vision models.
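For orientation, here is a minimal sketch of the kind of quickstart these notebooks walk through: loading a modelzoo model and rendering a feature visualization for one of its channels. It assumes this fork preserves the upstream Lucid API; the InceptionV1 layer and channel index are illustrative.

```python
# Minimal quickstart sketch (assumes the upstream Lucid API is unchanged in this fork).
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

# Load a pre-trained model from the modelzoo along with its frozen graph definition.
model = models.InceptionV1()
model.load_graphdef()

# Optimize an input image to excite one channel of one layer.
# "mixed4a_pre_relu:476" is an illustrative layer/channel; any modelzoo layer works.
_ = render.render_vis(model, "mixed4a_pre_relu:476")
```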
Notebooks corresponding to the Feature Visualization article
- This notebook reproduces the negative channel visualizations shown in the article.
- This notebook reproduces the neuron diversity visualizations shown in the article.
- This notebook reproduces some of the visualizations of interactions between neurons described in the article.
- This notebook describes the feature visualization algorithms used in the article; a rough sketch of the pieces they combine follows this list.
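As a rough sketch of those pieces, Lucid separates objectives (what to optimize for) from parameterizations (how the image is represented). The layer name and channel index below are illustrative, not taken from the article, and the snippet assumes the upstream Lucid API.

```python
import lucid.modelzoo.vision_models as models
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render

model = models.InceptionV1()
model.load_graphdef()

# Objective: maximize the mean activation of one channel (illustrative layer/channel).
obj = objectives.channel("mixed4a_pre_relu", 476)

# Parameterization: a decorrelated, Fourier-basis image, which tends to reduce
# high-frequency artifacts compared to naive pixel-space optimization.
param_f = lambda: param.image(128, fft=True, decorrelate=True)

_ = render.render_vis(model, obj, param_f=param_f)
```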
Notebooks corresponding to the Building Blocks of Interpretability article
- This notebook studies semantic dictionaries. The basic idea of semantic dictionaries is to marry neuron activations to visualizations of those neurons, transforming them from abstract vectors into something more meaningful to humans. Semantic dictionaries can also be applied to other bases, such as rotated versions of activation space that try to disentangle neurons.
- This notebook studies activation grids, a technique for visualizing how a network "understood" an image at a particular layer.
- This notebook demonstrates Spatial Attribution, a technique for exploring how detectors at different spatial positions in the network affected its output.
- This notebook demonstrates Channel Attribution, a technique for exploring how different detectors in the network affected its output.
- This notebook demonstrates Neuron Groups, a technique for exploring how groups of neurons in the network affected its output. ...
Notebooks corresponding to the Differentiable Image Parameterizations article
- This notebook uses the Lucid library to create visualizations that interpolate between two feature visualizations.
- This notebook uses Lucid to perform style transfer between two images, and shows how different parameterizations affect that process.
- Compositional Pattern Producing Network -- This notebook uses Lucid to produce aesthetically pleasing feature visualizations using a Differentiable Image Parameterization called a Compositional Pattern Producing Network (CPPN).
- Generating semi-transparent Feature Visualizations -- This notebook uses Lucid to produce semi-transparent feature visualizations using a Differentiable Image Parameterization with an extra alpha channel.
- This notebook uses Lucid to produce feature visualizations on 3D mesh surfaces by using a Differentiable Image Parameterization.
- This notebook uses Lucid to implement style transfer from a textured 3D model and a style image onto a new texture for the 3D model by using a Differentiable Image Parameterization.
Notebooks corresponding to the Activation Atlas article
- This notebook uses Lucid to produce feature inversion caricatures that are similar in spirit to the activation grids in The Building Blocks of Interpretability.
- This notebook uses Lucid to produce grids showing how different neurons interact.
- Feature Visualization
- The Building Blocks of Interpretability
- Using Artificial Intelligence to Augment Human Intelligence
- Visualizing Representations: Deep Learning and Human Beings
- Differentiable Image Parameterizations
- Activation Atlas
- Lessons from a year of Distill ML Research (Shan Carter, OpenVisConf)
- Machine Learning for Visualization (Ian Johnson, OpenVisConf)
We're in #proj-lucid on the Distill slack (join link).
We'd love to see more people doing research in this space!
You may use this software under the Apache 2.0 License. See LICENSE.
This project is research code. It is not an official Google product.
Lucid requires tensorflow, but does not explicitly depend on it in `setup.py`. Due to the way tensorflow is packaged and some deficiencies in how pip handles dependencies, specifying either the GPU or the non-GPU version of tensorflow will conflict with the version of tensorflow you may already have installed.
If you don't want to add your own dependency on tensorflow, you can specify which tensorflow version you want lucid to install by selecting from `extras_require`, e.g. `pip install lucid[tf]` or `pip install lucid[tf_gpu]`.
In practice, we recommend using the version of tensorflow you already have installed.
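Since this fork targets TensorFlow 2, a quick sketch of how to check which tensorflow (if any) your environment already provides, before deciding whether to use one of the extras:

```python
# Check whether tensorflow is already installed, and which version,
# before letting lucid pull one in via the [tf] / [tf_gpu] extras.
try:
    import tensorflow as tf
    print("tensorflow", tf.__version__, "is already installed")
except ImportError:
    print("tensorflow is not installed; consider pip install lucid[tf] or lucid[tf_gpu]")
```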