Update dependencies for TF 2.4 upgrade (quic#1005)
* Unpinned and updated dependency lists and Docker files for the TF 2.4 upgrade, updated install instructions, and made a minor Examples README update
Signed-off-by: Bharath Ramaswamy <[email protected]>

* Fix pylint warnings for tensorflow examples directory
Signed-off-by: Hitarth Mehta <[email protected]>
Co-authored-by: Hitarth Mehta <[email protected]>
quic-bharathr authored Feb 7, 2022
1 parent 9dcbb43 commit 63fe58c
Showing 27 changed files with 316 additions and 306 deletions.
6 changes: 3 additions & 3 deletions Examples/README.md
@@ -46,7 +46,7 @@ This section describes how to apply the various quantization and compression techniques
- Cross Layer Equalization performs BatchNorm Folding, Cross Layer Scaling, and High Bias Fold
- Bias Correction corrects shift in layer outputs introduced due to quantization
- _Adaround (Adaptive Rounding) - [Torch](torch/quantization/adaround.py), [TensorFlow](tensorflow/quantization/ada_round.py)_:
- AdaRound is a weight-rounding mechanism for post-training quantization (PTQ) that adapts to the data and the task loss. AdaRound is computationally fast, needs only a small number of unlabeled examples (which may even come from a different dataset in the same domain), optimizes a local loss, does not require end-to-end finetuning, requires little or no hyperparameter tuning across networks and tasks, and can be applied to convolutional or fully connected layers without modification. It is complementary to most other post-training quantization techniques such as CLE, batch-normalization folding, and high-bias absorption.
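To build intuition for what "adapts to the data" means here, the toy NumPy sketch below picks round-up vs. round-down per weight to shrink the layer's local output error on unlabeled data, instead of always rounding to nearest. This illustrates the idea only and is not the AIMET API; all sizes and names are made up, and AdaRound itself optimizes a soft relaxation rather than this brute-force sweep.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))      # one fully connected layer's weights
X = rng.normal(size=(16, 256))    # unlabeled calibration activations
scale = np.abs(W).max() / 127.0   # symmetric 8-bit scale (toy; clipping ignored)

low = np.floor(W / scale)         # each weight may round down (low) or up (low + 1)
nearest = np.round(W / scale)     # baseline: round-to-nearest

def output_err(w_int):
    """Local loss: reconstruction error of this layer's output."""
    return np.linalg.norm(W @ X - (w_int * scale) @ X)

W_ada = nearest.copy()
for _ in range(3):                # a few coordinate-descent sweeps suffice here
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            base_err = output_err(W_ada)
            old = W_ada[i, j]
            # flip between the two admissible grid points for this weight
            W_ada[i, j] = low[i, j] if old == low[i, j] + 1 else low[i, j] + 1
            if output_err(W_ada) >= base_err:
                W_ada[i, j] = old  # keep the flip only if it helps

print("round-to-nearest output error:", output_err(nearest))
print("adaptive rounding output error:", output_err(W_ada))  # never worse
```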

### Quantization Examples
- _Quantization-aware Training - [Torch](torch/quantization/quantization_aware_training.py), [TensorFlow](tensorflow/quantization/qat.py)_:
@@ -62,8 +62,8 @@ This section describes how to apply the various quantization and compression techniques
- Weight SVD is a tensor decomposition technique which decomposes one large layer (in terms of mac or memory) into two smaller layers. Given a neural network layer, with kernel (m,n,h,w) where m is the input channels, n the output channels, and h, w giving the height and width of the kernel itself, Weight SVD will decompose the kernel into one of size (m,k,1,1) and another of size (k,n,h,w), where k is called the rank. The smaller the value of k the larger the degree of compression achieved.
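For concreteness, the parameter arithmetic behind the rank-k split reads as follows (a sketch with hypothetical layer sizes, not taken from any example in this commit):

```python
# Weight SVD: a (m, n, h, w) kernel becomes (m, k, 1, 1) followed by (k, n, h, w).
m, n, h, w = 256, 512, 3, 3                 # hypothetical conv layer
k = 64                                      # chosen rank; smaller k -> more compression

original = m * n * h * w                    # 1,179,648 weights
decomposed = m * k + k * n * h * w          #   311,296 weights
print(f"compression ratio: {original / decomposed:.2f}x")  # ~3.79x
```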

## Running Examples via Jupyter Notebook
- - Install the Jupyter metapackage as follows:
-   `sudo -H python3 -m pip install jupyter`
+ - Install the Jupyter metapackage as follows (prepend with "sudo -H" if appropriate):
+   `python3 -m pip install jupyter`
- Start the notebook server as follows (please customize the command line options if appropriate):
`jupyter notebook --ip=* --no-browser &`
- The above command will generate and display a URL in the terminal. Copy and paste it into your browser.
2 changes: 1 addition & 1 deletion Examples/tensorflow/compression/channel_pruning.py
@@ -277,7 +277,7 @@ def compress_and_finetune(config: argparse.Namespace):
image_net_config.dataset['image_channels'])
tf.keras.backend.clear_session()
model = ResNet50(weights='imagenet', input_shape=input_shape)
- sess = tf.keras.backend.get_session()
+ sess = tf.compat.v1.keras.backend.get_session()
add_image_net_computational_nodes_in_graph(sess, model.output, image_net_config.dataset['images_classes'])
update_ops_name = [op.name for op in model.updates]

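The one-line change above is the recurring pattern of this commit: in TF 2.x the session-based Keras backend lives under tf.compat.v1, and the TF1-style workflow these examples use additionally requires graph mode. A minimal sketch of that pattern under TF 2.4 (weights=None only to keep the snippet self-contained offline):

```python
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50

# TF 2.x executes eagerly by default; the session workflow needs graph mode.
tf.compat.v1.disable_eager_execution()
tf.keras.backend.clear_session()

model = ResNet50(weights=None, input_shape=(224, 224, 3))
# Was tf.keras.backend.get_session() on TF 1.15:
sess = tf.compat.v1.keras.backend.get_session()

# Graph-mode tensor/op names are what helpers like
# add_image_net_computational_nodes_in_graph consume.
print(model.output.op.name)
```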
2 changes: 1 addition & 1 deletion Examples/tensorflow/compression/spatial_svd.py
@@ -253,7 +253,7 @@ def compress_and_finetune(config: argparse.Namespace):
image_net_config.dataset['image_channels'])
tf.keras.backend.clear_session()
model = ResNet50(weights='imagenet', input_shape=input_shape)
- sess = tf.keras.backend.get_session()
+ sess = tf.compat.v1.keras.backend.get_session()
add_image_net_computational_nodes_in_graph(sess, model.output, image_net_config.dataset['images_classes'])
update_ops_name = [op.name for op in model.updates]

2 changes: 1 addition & 1 deletion Examples/tensorflow/compression/spatial_svd_cp.py
@@ -364,7 +364,7 @@ def compress_and_finetune(config: argparse.Namespace):
tf.keras.backend.clear_session()
model = ResNet50(weights='imagenet', input_shape=input_shape)
logger.info("loaded model")
- sess = tf.keras.backend.get_session()
+ sess = tf.compat.v1.keras.backend.get_session()
add_image_net_computational_nodes_in_graph(sess, model.output, image_net_config.dataset['images_classes'])
update_ops_name = [op.name for op in model.updates]

2 changes: 1 addition & 1 deletion Examples/tensorflow/quantization/ada_round.py
@@ -275,7 +275,7 @@ def perform_adaround(config: argparse.Namespace):
image_net_config.dataset['image_channels'])
tf.keras.backend.clear_session()
model = ResNet50(weights='imagenet', input_shape=input_shape)
- sess = tf.keras.backend.get_session()
+ sess = tf.compat.v1.keras.backend.get_session()
add_image_net_computational_nodes_in_graph(sess, model.output, image_net_config.dataset['images_classes'])

# 3. Calculates Model accuracy
2 changes: 1 addition & 1 deletion Examples/tensorflow/quantization/cle_bc.py
@@ -299,7 +299,7 @@ def perform_cle_bc(config: argparse.Namespace):
image_net_config.dataset['image_channels'])
tf.keras.backend.clear_session()
model = MobileNet(weights='imagenet', input_shape=input_shape)
- sess = tf.keras.backend.get_session()
+ sess = tf.compat.v1.keras.backend.get_session()
add_image_net_computational_nodes_in_graph(sess, model.output, image_net_config.dataset['images_classes'])

# 3. Calculates Model accuracy
2 changes: 1 addition & 1 deletion Examples/tensorflow/quantization/qat.py
@@ -247,7 +247,7 @@ def perform_qat(config: argparse.Namespace):
image_net_config.dataset['image_channels'])
tf.keras.backend.clear_session()
model = ResNet50(weights='imagenet', input_shape=input_shape)
- sess = tf.keras.backend.get_session()
+ sess = tf.compat.v1.keras.backend.get_session()
add_image_net_computational_nodes_in_graph(sess, model.output, image_net_config.dataset['images_classes'])
update_ops_name = [op.name for op in model.updates]

2 changes: 1 addition & 1 deletion Examples/tensorflow/quantization/range_learning.py
@@ -274,7 +274,7 @@ def aimet_range_learning(config: argparse.Namespace):
image_net_config.dataset['image_channels'])
tf.keras.backend.clear_session()
model = ResNet50(weights='imagenet', input_shape=input_shape)
- sess = tf.keras.backend.get_session()
+ sess = tf.compat.v1.keras.backend.get_session()
add_image_net_computational_nodes_in_graph(sess, model.output, image_net_config.dataset['images_classes'])
update_ops_name = [op.name for op in model.updates]

@@ -77,4 +77,4 @@ def add_image_net_computational_nodes_in_graph(session: tf.Session, logits: tf.p
name='top5-acc')

# loss Op: loss
- loss = tf.reduce_mean(tf.losses.softmax_cross_entropy(onehot_labels=y, logits=y_hat))
+ loss = tf.reduce_mean(tf.compat.v1.losses.softmax_cross_entropy(onehot_labels=y, logits=y_hat))
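tf.losses moved under tf.compat.v1 in TF 2.x, and the commit keeps the v1 symbol because these utilities still build a graph. For reference, a native-TF2 sketch of the same mean softmax cross-entropy (the tensors here are made up):

```python
import tensorflow as tf

y = tf.one_hot([1, 0], depth=3)                           # one-hot labels
y_hat = tf.constant([[0.1, 2.0, -1.0], [1.5, 0.2, 0.3]])  # logits

cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
loss = cce(y, y_hat)  # already averaged over the batch
```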
2 changes: 1 addition & 1 deletion Examples/tensorflow/utils/image_net_data_loader.py
@@ -175,7 +175,7 @@ def parse(self, serialized_example: tf.python.ops.Tensor) -> Tuple[tf.python.ops
labels = tf.one_hot(indices=label, depth=image_net_config.dataset['images_classes'])

# Decode the jpeg
- with tf.name_scope('prep_image', values=[image_data], default_name=None):
+ with tf.compat.v1.name_scope('prep_image', values=[image_data], default_name=None):
# decode and reshape to default self._image_size x self._image_size
# pylint: disable=no-member
image = tf.image.decode_jpeg(image_data, channels=image_net_config.dataset['image_channels'])
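The name_scope edit is needed because TF 2.x's tf.name_scope accepts only a name; the values= and default_name= arguments survive only on tf.compat.v1.name_scope, which is what the commit switches to. In native TF2 style the same scoping would reduce to the sketch below (the in-memory JPEG just keeps it self-contained):

```python
import tensorflow as tf

# A tiny JPEG created in memory so the snippet runs standalone.
image_data = tf.io.encode_jpeg(tf.zeros([8, 8, 3], dtype=tf.uint8))

# TF 2.x: name only; values=/default_name= exist only on tf.compat.v1.name_scope.
with tf.name_scope('prep_image'):
    image = tf.image.decode_jpeg(image_data, channels=3)
```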
18 changes: 9 additions & 9 deletions Examples/tensorflow/utils/image_net_trainer.py
@@ -182,22 +182,22 @@ def train(self, session: tf.Session, update_ops_name: List[str] = None, iteratio
with session.graph.as_default():
loss_op = tf.get_collection(tf.GraphKeys.LOSSES)[0]

- global_step_op = tf.train.get_global_step()
+ global_step_op = tf.compat.v1.train.get_global_step()
if global_step_op is None:
- global_step_op = tf.train.create_global_step()
+ global_step_op = tf.compat.v1.train.create_global_step()

if decay_steps:
- learning_rate_op = tf.train.exponential_decay(learning_rate,
-                                               global_step=global_step_op,
-                                               decay_steps=decay_steps * iterations,
-                                               decay_rate=decay_rate,
-                                               staircase=True,
-                                               name='exponential_decay_learning_rate')
+ learning_rate_op = tf.compat.v1.train.exponential_decay(learning_rate,
+                                                         global_step=global_step_op,
+                                                         decay_steps=decay_steps * iterations,
+                                                         decay_rate=decay_rate,
+                                                         staircase=True,
+                                                         name='exponential_decay_learning_rate')
else:
learning_rate_op = learning_rate

# Define an optimizer
- optimizer_op = tf.train.MomentumOptimizer(learning_rate=learning_rate_op, momentum=0.9)
+ optimizer_op = tf.compat.v1.train.MomentumOptimizer(learning_rate=learning_rate_op, momentum=0.9)

# Ensures that we execute the update_ops before performing the train_op
update_ops = set(update_ops).union(tf.get_collection(tf.GraphKeys.UPDATE_OPS))
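Every tf.train symbol touched in this hunk (get_global_step, create_global_step, exponential_decay, MomentumOptimizer) lives under tf.compat.v1 in TF 2.4. A condensed, self-contained sketch of the same setup, including the UPDATE_OPS control dependency the trainer relies on for BatchNorm statistics (the placeholder model is made up):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
graph = tf.Graph()
with graph.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [None, 4])
    y = tf.compat.v1.placeholder(tf.float32, [None, 1])
    w = tf.Variable(tf.zeros([4, 1]))
    loss_op = tf.reduce_mean(tf.square(y - x @ w))

    global_step_op = tf.compat.v1.train.get_global_step()
    if global_step_op is None:
        global_step_op = tf.compat.v1.train.create_global_step()

    learning_rate_op = tf.compat.v1.train.exponential_decay(
        0.1, global_step=global_step_op, decay_steps=100,
        decay_rate=0.9, staircase=True)
    optimizer_op = tf.compat.v1.train.MomentumOptimizer(
        learning_rate=learning_rate_op, momentum=0.9)

    # BatchNorm moving-average updates are registered in UPDATE_OPS;
    # running them before each step is what update_ops_name preserves.
    update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = optimizer_op.minimize(loss_op, global_step=global_step_op)
```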
69 changes: 36 additions & 33 deletions Jenkins/Dockerfile.tf-cpu
@@ -63,29 +63,29 @@ RUN sudo update-ca-certificates
# Add sudo support
RUN echo "%users ALL = (ALL) NOPASSWD: ALL" >> /etc/sudoers

- RUN apt-get update > /dev/null && \
+ RUN apt-get update -y > /dev/null && \
apt-get install --no-install-recommends -y \

# Python
- python \
- python3-dev \
+ python3.6 \
+ python3.6-dev \
+ python3-pip \

- # lmdb depenedency
+ # lmdb dependency
libffi-dev && \
rm -rf /var/lib/apt/lists/*

# Python 2 pip installation
RUN apt-get update && apt-get install -y python-pip && rm -rf /var/lib/apt/lists/* && \
python2.7 -m pip --no-cache-dir install --upgrade \
pip==20.3.4 \
- restkit==4.2.2
+ restkit

# Upgrade Python3 pip and install some more packages
- RUN pip3 --no-cache-dir install --upgrade \
- setuptools==41.0.1 \
+ RUN python3 -m pip --no-cache-dir install --upgrade \
+ pip==20.2.4 \
- numpy==1.16.6 \
+ 'numpy>=1.19.5' \
+ setuptools==41.0.1 \
wheel==0.33.4

ENV DEBIAN_FRONTEND=noninteractive
Expand Down Expand Up @@ -118,51 +118,52 @@ RUN apt-get update > /dev/null && \
rm -rf /var/lib/apt/lists/*

# Python3 Packages
- RUN pip3 --no-cache-dir install \
+ RUN python3 -m pip --no-cache-dir install \
astroid==2.5.3 \
attrs==19.1.0 \
behave==1.2.6 \
- bert-tensorflow==1.0.1 \
+ bert-tensorflow \
blosc==1.8.1 \
cffi==1.12.3 \
- click==7.0 \
+ click \
cython==0.29.10 \
dataclasses \
- Deprecated==1.2.12 \
+ Deprecated \
docutils==0.16 \
h5py==2.9.0 \
- ipykernel==4.8.2 \
+ ipykernel \
Jinja2>=2.9 \
jupyter \
- keras==2.2.4 \
lmdb==0.95 \
opencv-python==4.1.0.25 \
- pillow==6.2.1 \
+ Pillow==8.4.0 \
pluggy==0.12.0 \
progressbar2 \
- protobuf==3.7.1 \
- psutil==5.8.0 \
- ptflops==0.6.4 \
- pybind11==2.6.1 \
- pyDOE2==1.3.0 \
+ protobuf \
+ psutil \
+ ptflops \
+ pybind11 \
+ pyDOE2 \
pylint==2.3.1 \
- pymoo==0.4.1 \
+ pymoo \
pytest==4.6.5 \
pytest-cov==2.6.1 \
scikit-image==0.15.0 \
- scikit-learn==0.21.0 \
- scipy==1.2.1 \
+ scikit-learn \
+ 'scipy>=1.2.1' \
sphinx==2.1.1 \
sphinx-jinja==1.1.1 \
sphinx-autodoc-typehints==1.6.0 \
- tensorboard==1.15 \
- tensorboardX==1.7 \
- tensorflow==1.15.0 \
+ tensorflow-cpu==2.4.3 \
+ tensorflow-hub \
+ tensorflow-model-optimization \
tensorlayer==2.1.0 \
timm==0.3.1 \
- tqdm==4.32.2 \
- wget==3.2 && \
+ tqdm \
+ transformers==4.10.3 \
+ wget && \
python3 -m ipykernel.kernelspec
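Since the list now pulls tensorflow-cpu==2.4.3 in place of the pinned TF 1.15 stack, a quick interpreter check inside the built image verifies the upgrade (a sketch; the expected values follow from this Dockerfile's pins):

```python
import tensorflow as tf

print(tf.__version__)                # expect 2.4.3 in this image
print(tf.test.is_built_with_cuda())  # False: this is the CPU Dockerfile
```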

RUN cd /tmp && \
@@ -173,6 +174,7 @@ RUN cd /tmp && \
RUN ln -s /opt/cmake/bin/cmake /usr/local/bin/cmake
RUN ln -s /opt/cmake/bin/ctest /usr/local/bin/ctest
RUN ln -s /opt/cmake/bin/cpack /usr/local/bin/cpack

ENV PATH=/usr/local/bin:$PATH

# Opencv
@@ -198,24 +200,25 @@ RUN apt-get update && apt-get install -y libjpeg8-dev && \
RUN ln -sf /usr/bin/python3.6 /usr/bin/python
RUN ln -s /usr/lib/x86_64-linux-gnu/libjpeg.so /usr/lib

- # Remove pillow and replace with pillow-simd.
- RUN pip3 uninstall -y Pillow && pip3 install Pillow-SIMD==6.0.0.post0

RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config && \
sed -i 's/Port 22/Port 25000/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

# Clone the tensorflow repo to enable development
- RUN cd / && git clone --depth 1 --single-branch --branch v1.15.0 https://github.com/tensorflow/tensorflow.git
+ RUN cd / && git clone --depth 1 --single-branch --branch v2.4.3 https://github.com/tensorflow/tensorflow.git

- RUN pip install git-pylint-commit-hook osqp==0.6.1 onnx==1.8.1
+ RUN python3 -m pip install git-pylint-commit-hook osqp onnx==1.8.1

# NOTE: We need to pin the holoviews version to this since the latest version has a circular dependency on bokeh 2.0.0 through the panel package
- RUN pip install holoviews==1.12.7 netron jsonschema pandas
+ RUN python3 -m pip install holoviews==1.12.7 netron jsonschema pandas


- RUN pip install bokeh==1.2.0 hvplot==0.4.0
+ RUN python3 -m pip install bokeh==1.2.0 hvplot==0.4.0
+ # Remove existing Pillow & Pillow-SIMD and replace with correct version of Pillow-SIMD.
+ RUN python3 -m pip uninstall -y Pillow Pillow-SIMD
+ RUN python3 -m pip --no-cache-dir install Pillow-SIMD==7.0.0.post3

RUN apt-get update && apt-get install -y gnupg2
RUN wget -O - http://llvm.org/apt/llvm-snapshot.gpg.key|sudo apt-key add - && echo "deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-11 main" >> /etc/apt/sources.list